Ordina Open World 2014

The world's largest IT conference, Oracle OpenWorld/JavaOne in San Francisco, is over. Several Ordina people are…

Pattern Matching (MATCH_RECOGNIZE) in Oracle Database 12c

I’ve spent the last couple of evenings playing with the new SQL pattern matching feature in Oracle 12c.

I’m doing some sessions on analytic functions in some upcoming conferences and I thought I should look at this stuff. I’m not really going to include much, if anything, about it as my sessions are focussed on beginners and I don’t really want to scare people off. The idea is to ease people in gently, then let them scare themselves once they are hooked on analytics. :) I’m thinking about Hooked on Monkey Fonics now…

At first glance the pattern matching seems pretty scary. There are a lot of options and as soon as you throw regular expressions into the mix it does make your head swim a little. After a couple of half-baked attempts, where I found convenient excuses to give in when the going got tough, I finally sat down and plugged through the docs. If you actually RTFM it is a lot easier than hoping to wing it. Who’da thunk it? :)

I’ve tried to keep the article really light. The docs are pretty good for this stuff (if you read them) and they have a lot of examples. I started adding more and more detail to the article, then chopped most of it out. There is no point regurgitating all the options when it is in the docs. Most of the examples I’ve seen before just talk about basic patterns, like V and W shapes, but it’s quite simple to do complicated stuff once you start playing. In fact it takes more time to set up the example data than it does to figure out the queries to bring it back.
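To give a flavour of the syntax, here is a minimal V-shape example. The table and column names (sales_history, product_id, sale_date, amount) are my own invention for illustration, not from any real schema:

```sql
-- Find V-shaped dips in price per product: a run of falling values
-- followed by a run of rising values.
SELECT *
FROM   sales_history
MATCH_RECOGNIZE (
  PARTITION BY product_id
  ORDER BY sale_date
  MEASURES
    FIRST(down.sale_date) AS start_date,   -- where the fall began
    LAST(down.sale_date)  AS bottom_date,  -- the bottom of the V
    LAST(up.sale_date)    AS end_date      -- where the rise ended
  ONE ROW PER MATCH
  AFTER MATCH SKIP TO LAST up
  PATTERN (strt down+ up+)
  DEFINE
    down AS down.amount < PREV(down.amount),
    up   AS up.amount   > PREV(up.amount)
);
```

The DEFINE clause classifies each row, and the PATTERN clause is a regular expression over those classifications, which is where the head-swimming comes in.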

In the near future I will be copy/pasting examples and adjusting them or just sitting with my article and the docs when trying to use this stuff. I think it’s going to take a long time before I can type this stuff from memory. Partly that’s because I can’t see myself having lots of cause to use it. I can’t think of any scenarios I’ve experienced where this would have been a natural fit. Having said that, I’ve never worked in things like stock markets, betting and stuff like that where I can imagine this sort of pattern matching is needed all the time. I seem to remember one person at a conference, who shall remain nameless, saying this feature was one of their drivers for upgrading to 12c. I wonder if that was for real or an exaggeration?

Anyway, if you need this sort of analysis, I think it’s worth checking out, but try to remember it’s not as scary as it first looks. :)



Pattern Matching (MATCH_RECOGNIZE) in Oracle Database 12c was first posted on October 23, 2014 at 6:45 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Deploying a Private Cloud at Home — Part 6

Today’s blog post is part six of seven in a series dedicated to Deploying a Private Cloud at Home, where I will be demonstrating how to configure the controller node with legacy networking and the OpenStack dashboard for a web GUI. Feel free to check out part five, where we configured the compute node with OpenStack services.

  1. First load the admin variables admin-openrc.sh
    source /root/admin-openrc.sh
  2. Enable legacy networking
    openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.api.API
    openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api nova
  3. Restart the Compute services
    service openstack-nova-api restart
    service openstack-nova-scheduler restart
    service openstack-nova-conductor restart
  4. Create the IP pool which will be assigned to the instances we will launch later. I took a subpool of my network's range and I am using that subnet to assign IPs to the VMs. As the VMs will be on my shared network, I want the IPs in the same range as the other systems on the network.
    Here I am using the subnet of
  5. Create a network
    nova network-create vmnet --bridge br0 --multi-host T --fixed-range-v4
  6. Verify networking by listing the network
    nova net-list
  7. Install the dashboard. The dashboard gives you a web UI to manage OpenStack instances and services. As we will be using the default configuration, I won't go into detail on this.
    yum install -y mod_wsgi openstack-dashboard
  8. Update ALLOWED_HOSTS in local_settings to include the addresses you wish to access the dashboard from. I am running this on my intranet, so I allowed every host in my network, but you can restrict access to specific hosts.
    ALLOWED_HOSTS = ['*']
  9. Start and enable Apache web server
    service httpd start
    chkconfig httpd on
  10. You can now access the dashboard at http://controller/dashboard
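Once the dashboard is up, a quick smoke test is to boot an instance on the new network and check that it picks up an address from the fixed range. This is just a sketch: the cirros image and m1.tiny flavor are assumptions and may not exist in your environment.

```shell
# Load the admin credentials, as in step 1.
source /root/admin-openrc.sh

# Grab the ID of the network created earlier (assumes it is named vmnet).
NET_ID=$(nova net-list | awk '/vmnet/ {print $2}')

# Boot a minimal test instance; image and flavor names are assumptions.
nova boot --flavor m1.tiny --image cirros --nic net-id=$NET_ID testvm

# The instance should appear with an IP from the fixed range.
nova list
```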


This completes the configuration of the OpenStack private cloud. We can use the same guide for the Rackspace private cloud, as it too is based on OpenStack Icehouse, but that is for another time.

Now that we have a working IaaS cloud, we can configure any PaaS or SaaS on top of it, but that will require another series altogether.

Stay tuned for part seven, our final post in the series Deploying Private Cloud at Home, where I will be sharing scripts that will automate the installation and configuration of controller and compute nodes.

The APPROX_COUNT_DISTINCT Function – A Test Case

The aim of this post is not to explain how the APPROX_COUNT_DISTINCT function works (you can find basic information in the documentation and in this post written by Luca Canali), but to show you the results of a test case I ran to assess how well it works.

Here’s what I did…

I created a table with several numerical columns (the name of the column shows how many distinct values it contains), loaded 100 million rows into it (the size of the segment is 12.7 GB), and gathered the object statistics.

SQL> CREATE TABLE t
  2  AS
  3  WITH
  4    t1000 AS (SELECT /*+ materialize */ rownum AS n
  5              FROM dual
  6              CONNECT BY level <= 1E3)
  7  SELECT rownum AS id,
  8         mod(rownum,2) AS n_2,
  9         mod(rownum,4) AS n_4,
 10         mod(rownum,8) AS n_8,
 11         mod(rownum,16) AS n_16,
 12         mod(rownum,32) AS n_32,
 13         mod(rownum,64) AS n_64,
 14         mod(rownum,128) AS n_128,
 15         mod(rownum,256) AS n_256,
 16         mod(rownum,512) AS n_512,
 17         mod(rownum,1024) AS n_1024,
 18         mod(rownum,2048) AS n_2048,
 19         mod(rownum,4096) AS n_4096,
 20         mod(rownum,8192) AS n_8192,
 21         mod(rownum,16384) AS n_16384,
 22         mod(rownum,32768) AS n_32768,
 23         mod(rownum,65536) AS n_65536,
 24         mod(rownum,131072) AS n_131072,
 25         mod(rownum,262144) AS n_262144,
 26         mod(rownum,524288) AS n_524288,
 27         mod(rownum,1048576) AS n_1048576,
 28         mod(rownum,2097152) AS n_2097152,
 29         mod(rownum,4194304) AS n_4194304,
 30         mod(rownum,8388608) AS n_8388608,
 31         mod(rownum,16777216) AS n_16777216
 32  FROM t1000, t1000, t1000
 33  WHERE rownum <= 1E8;

SQL> execute dbms_stats.gather_table_stats(user,'T')

Then, for every column, I ran two queries and measured the elapsed time, the maximum amount of PGA used by the query, and the precision of the result. Note that the test case was designed to avoid the use of a temporary segment. In other words, all data required for the aggregation was stored in the PGA. As a result, the processing was CPU bound.

SELECT /*+ no_parallel */ count(DISTINCT n_2) FROM t
SELECT /*+ no_parallel */ approx_count_distinct(n_2) FROM t
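For reference, one way the per-query PGA figure could be captured (a sketch; not necessarily how the measurements below were taken, and it assumes SELECT privileges on the v$ views):

```sql
-- Check the session's high-water PGA mark before the test query...
SELECT sn.name, ms.value
FROM   v$mystat ms
       JOIN v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name = 'session pga memory max';

-- ...run the query under test...
SELECT /*+ no_parallel */ approx_count_distinct(n_2) FROM t;

-- ...then check again; the delta gives a rough idea of the memory
-- the aggregation used.
SELECT sn.name, ms.value
FROM   v$mystat ms
       JOIN v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name = 'session pga memory max';
```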

Let’s have a look at three charts summarizing how well the APPROX_COUNT_DISTINCT function works:

  • The first chart shows that for both functions the elapsed time depends on the number of distinct values. That said, while for the APPROX_COUNT_DISTINCT function the impact of the number of distinct values is minimal (the elapsed time goes from 5.8 up to 7.1 seconds), for the COUNT function it’s much higher (the elapsed time goes from 10.5 up to 35.2 seconds).

Elapsed Time

  • The second chart (note that the y-axis scale is logarithmic) shows that only for the COUNT function does the maximum amount of PGA depend on the number of distinct values (from 10MB up to 807MB). For the APPROX_COUNT_DISTINCT function it was exactly the same (10MB) in all situations.

Maximum PGA Usage

  • The third chart shows that the precision of the APPROX_COUNT_DISTINCT function stays between -4% and +4%.

Precision

According to this test case, in my opinion, the performance and accuracy of the APPROX_COUNT_DISTINCT function with numerical values (I still have to test other data types) are good. Hence, I see no reason not to use it when an estimate of the number of distinct values is enough.

A Q&A with Australia’s Mobile Luminary Andrew Paterson

Looking for insight into what makes an award-winning mobile solution amongst Oracle's customers? The following Q&A excerpt captures an online interview with Andrew Paterson from Rubicon Red, sharing his experiences in building a mobile solution for Australia's National Pharmacies that won the Enterprise Mobility Award at Oracle Open World 2014. The application was built using Oracle technology: a combination of Oracle's Mobile Application Framework (MAF) for the mobile front end, and Oracle's SOA Suite of products at the back end.

Hi Andrew, I had the pleasure of meeting you at Oracle Open World, but for our audience could you please introduce yourself, your role and the organisation you work for?

Andrew: Hi, my name is Andrew Paterson and I work for Rubicon Red – a company based out of Australia that provides products and services relating to Oracle Fusion Middleware. My role is Practice Manager, which involves a mixture of line and resource management, pre-sales and consulting work for clients.  

Mobility is my main focus area at the moment, as we are increasingly finding that it is something customers now see as being core to their business.

What was your background before working on mobile solutions?

Andrew: I have quite a varied background, having worked previously with Ada, Lotus Notes, Java and building Web and Portal applications. The past 7-8 years have been focused on SOA and BPM, through development and consulting.

My interest has always been in delivering solutions that make the users' lives easier. Typically with SOA this is abstracting away the complexities of the underlying systems and then exposing services that can be easily consumed.  

As a developer, what’s changed since you took up the “mobile first” mantra?

Andrew: The key phrase would be “build for change” - Mobility innovation and trends move a lot faster than traditional applications or web. For example, in the timeframe of the current application build (last 6 months) we have seen the Samsung S5 and Apple iPhone 6 devices launched, along with a new iOS and Android Lollipop being released this Friday. Coupled with these are changes to the UI styles and expected patterns of operations. 

The challenge is to keep up with these trends. This was one of the main reasons we picked Oracle's Mobile Application Framework. We don’t have to worry about maintaining the underlying structures to keep apace with these changes, MAF takes care of this for us. Our focus is instead on what we can do differently within the app and whether we need to evolve the UI.

The next noticeable change is that mobility has changed the way customers view SOA. Previously it could be a hard sell and often it was typically only used to replace existing integrations. The benefits of SOA as an approach were often lost and it was hard for them to see the value in building re-usable services and having a service catalogue. Now, it is impossible for a customer to build mobile applications without having a services layer. A well-structured and performant API is critical for being able to build applications that work well for consumers. 

A minor but important change is that the presentation layer infrastructure is now something we don’t need to provide and maintain. The app runs on the user’s device, saving both money and time.

Finally, it is crucial to now see the application as being something you are in for the long term, with multiple and frequent development cycles to add or improve features. The nature of the interaction with the user is more personal and direct, so whatever is built needs to evolve. Users expect this change – look at any app on your phone that hasn’t updated in the past year or so and it probably looks dated and clunky.

Could you give us an outline of what Oracle Mobile Application Framework applications you’ve built or been responsible for?

Andrew: Prior to MAF we had worked with various customers to build ADF Mobile [Editor's note: ADF Mobile was the precursor to MAF] applications, mainly for sales order entry and purchase order approval (typically moving these functions out of Oracle ERPs). These came about through the use of SOA and BPM to build services, which were then easily consumable via a mobile front-end.

With our latest application, we are using MAF and working on a consumer application for a membership based pharmacy organisation. This app will deliver a new channel of interaction for their members, to view prescriptions, membership details and to interact with the company. Providing the app will also mean members no longer need to carry a plastic card to prove membership.

Can you outline the size of your teams, your apps, the timelines involved?

Andrew: The current project consists of a solution architect/mobile developer to design and build the front-end application and a SOA developer for building back end services in Oracle API Gateway and SOA Suite. The customer’s marketing team are responsible for providing a style guide to build the application to. 

The breakdown of work is around 80 days, with 40-50 of these being the work required to expose services that interact with the internal systems. 

That's interesting, so more than 50% of the work was with the back end services rather than just the mobile UI front end. Was that your experience in the earlier mobile apps you built too?

Andrew: It was probably similar percentages, although we’ve found MAF easier and quicker for building apps than ADF Mobile. Some of the tool improvements and framework changes have reduced the development time considerably. For customers who have been on the SOA journey, it should be relatively simple to build an app that utilises their existing services.

Arguably a mobile app can just be a reflection of an existing desktop or browser based system. Can you talk to the qualities of your app that makes it “mobile”, in other words what’s unique to the solution and what benefits did it bring over the original systems?

Andrew: Whilst some of the features in the application are available on the website, the big change is that it provides a channel for direct interaction between the member and the organisation and adds features more related to the ‘now’ moment. For example, members will be able to receive notifications that their prescriptions are ready.  

Also, whilst it isn’t being leveraged at the moment, we will be able to detect that a member is in-store and this opens up scope to provide spot/local promotions and to improve the interaction between the member and the shop staff.

What advice can you give for anyone wanting to start their first Oracle Mobile Application Framework application?

Andrew: Don’t be afraid! The framework is relatively easy to understand and you can quickly build a nice looking app that exposes enterprise data from multiple sources. Consider your use cases – a big one for us was to be able to work ‘offline’ – i.e. the user shouldn’t need to login to view some of their data. Look for common issues faced within the organisation – for example, what are the top five reasons that users call your customer services department and how could these be alleviated by exposing a self-service application?

The framework also provides a lot of functionality out of the box, so make sure you look through the examples provided to see what is possible. Also take a trip to the MAF tutorial videos as these provide a really good introduction to the overall framework.

We’ve found that using an actual device for development and testing was easier than using the emulators. Getting this set up and working also helps you gain an understanding of the tools available and how various components work. 

As mentioned before – build for change. Always have in mind how you will add new features and functions and the impact these will have on the app you are designing.  

Finally – UI style changes are frequent so do not focus too much effort on a pixel perfect rendering. Build something that looks nice but understand it will likely need to change within a year. The critical aspect is the functionality – users are unforgiving of errors and poor performance.

What qualities do you see an “enterprise” mobile app having over consumer apps? Do you bother to differentiate them?

Andrew: I see there being subtle differences between an app built for the enterprise and one for consumers. An enterprise app is likely to expose more interactive features (i.e. create/update/delete of data), whereas consumer focused apps will typically show the status of data from within various systems. 

This means that the enterprise app will likely need more consideration to different levels of access and security roles. As you are exposing functions that change corporate data, you’ll likely need to hide/show things like fields and buttons accordingly.

However, outside of that there probably isn’t much to differentiate. It’s tempting to say that an enterprise app may not need as frequent UI changes, but I’m not sure that is true anymore!

When gathering requirements for mobile applications from your customers/users, was there anything that took you by surprise?

Andrew: Typically, when building web or BPM apps it can be difficult to get business users to explain what they require, as often they are not aware of the functionality that is available or what is possible. Mobility is a reverse of this – people know quite well what a mobile device can or can’t do, the features available and have an expectation on how things should work. Design becomes a lot easier as everyone is sharing a common terminology and understanding of what and how things should work.

Another noticeable effect was that there were a lot of creative ideas proposed around what the app could provide. Whilst some of these have been put to one side for the initial launch of the app, they have been useful for providing an understanding of how we should construct the app to cater for these future enhancements.

In developing a mobile solution for your customers, what advice can you give them in considering mobile applications for the first time?

Andrew: Be clear about why you are launching an app and the features and functions that it will provide. What is the intent and why would someone download and keep the app? Having decided this, break the features into bodies of work that can be completed in phases no longer than a couple of months each. If it takes you longer than a couple of months to get a release out (for a consumer focused app), then you’re likely to fall behind the market due to the amount of innovation and change that is occurring.

Related to this is don’t fall at the first hurdle through ‘analysis paralysis’. If a feature is too complex to decide upon now, then roll it into the next release. The key thing is to get an app out there that you can expand upon. If you’re not sure what features the users want – then ask them through a survey or similar.

If we focus on Oracle’s Mobile Application Framework, if you were to pitch the positives to a new customer, what would you cover?

Andrew: The key point to pitch is that it provides a fully featured framework to build an app that can then be deployed to both iOS and Android. This greatly speeds up development and reduces costs, as the developer doesn’t need to focus on writing Android or Objective C page structures and navigation.

The approach for data controls and pages is based around ADF and is a proven way to build applications. The ease of wiring in services means that it can be very quick to build an application that interacts with your existing systems. For customers with Oracle systems you have that reassurance it has been built to work seamlessly with these.

For a developer it’s a relatively simple MVC pattern that is easy to understand and use. The fact it is built on open standards and can embed alternate JavaScript libraries or use Cordova means you are not restricted in what you want to do. 

Finally, it’s worth re-iterating that the product is constantly evolving and adding new features. The ability to get these for ‘free’ without needing to invest time to build them yourself cannot be overstated.

And to be fair, let’s cover the negatives too?

Andrew: At times we have hit some quirky device related issues that have been hard to resolve. Mostly MAF protects you from these issues, but this is compounded due to the variety of devices that exist (particularly for Android). Often a bug is with a particular device or version of phone operating system, so this can be frustrating (especially when trying to explain how to recreate it and the person can’t).

Documentation can be sparse at times as well – or at least, examples that cover more complex use cases can be. There are times where parameters or features are skipped over as though it is obvious how they work. However, just like MAF, Oracle is rapidly evolving the documentation set too, so this will improve over time.

Do you see any unique challenges in the Australian mobile market?

Andrew: Australia is quite unique due to the proliferation of people that have smart phones. I heard a statistic the other day that the top 2 providers have 29 million mobile contracts between them – more than the population that exists today.

Another consideration for Australia is that once you get out of the cities the connectivity can drop off quite significantly. This needs to be factored in when building your app – what is needed to keep the app running in these situations?

Any predictions for enterprise mobility in 2015 and 2020?

Andrew: 2015 is probably easier – the tools will evolve to support multi-tenancy on the device, i.e. a work profile and a personal profile. Some of this exists today, but again it will get easier to manage as the tooling gets better.

2020 – who knows, better interactivity with our rocket cars :-) I have a HTC Desire that was released around about 5 years ago and the difference in form and function is shocking. It seems barely usable and horrendously slow. I think the big change will be towards ease of access and ‘always on’ connectivity to enterprise data.  

Thanks very much for your time Andrew, and sharing your experiences and expert insights into the mobility market.  Best of luck with your next mobile application!

Rubicon Red, an Australian Oracle Gold partner, is an innovative IT professional services firm focused on enabling enterprise agility and operational excellence through the adoption of emerging technologies such as Service-Oriented Architecture (SOA), Business Process Management (BPM), Cloud Computing and Mobile solutions.

Oracle ACE = Oracle’s Bitch?

I got a comment today on my recent Oracle fanboy post, which I thought was very interesting and worthy of a blog post in reply. The commenter started by criticising the Oracle license and support costs (we’ve all had that complaint) as well as the quality of support (yes, I’ve been there too), but that wasn’t the thing that stood out. The final paragraph was as follows…

“One addition. I know you, your past work and you are very brainy person but since last couple of years you became Oracle doctrine submissive person just like most of the rest of ACE Directors. When you were just ACEs, you were more trustworthy than now and you weren’t just Oracle interpreters… And unfortunately I’m not the only person with this opinion, but probably I’m only one who is not affraid to make it public.”

I think that’s a really interesting point and one that I feel compelled to write about…

Let me start by saying I don’t believe this comment was in any way directed at the main body of my website. The articles there have always been “how-to” style articles and typically don’t contain much in the way of opinions about the functionality. I’ve always tried to keep facts in the articles and opinions and random junk on the blog. With that distinction in place, let’s talk about my blog…

When I first joined the Oracle ACE Program in 2006 I was very conscious of what *I thought it meant* about what I could and couldn’t say. On the one hand I didn’t want to piss off Oracle as I was very proud of my little ACE badge, but I also didn’t want to be considered Oracle’s Bitch. I quickly learned a couple of things:

  • You are selected for what you are currently doing in the community. If you just keep doing what you do, life will be good. If you spend your whole time slagging off Oracle, you probably won’t get invited on to the program in the first place. If over time you turn into a complete hater, you will probably be asked to leave the program. I guess that’s pretty obvious and true of any evangelism program out there. Does that mean you can’t ever criticise Oracle? Hell no! Instead, I think it makes it your obligation to give constructive criticism whenever possible. One of the things we are encouraged to do is to make stronger links with the product managers so we can give more feedback to help improve the products. If you witnessed the amount of moaning and complaints that get fired at some of the Oracle teams during the ACE Director briefings, you would have no doubts about this. :)
  • The value of the Oracle ACE Program to Oracle is that it is made up of “real” people who think Oracle is cool enough to spend their own time talking about it. If the Oracle ACE Program becomes a collection of yes-men and yes-women, then they might as well send a bunch of sales people to every conference. Oracle have (so far), never complained or tried to veto anything I’ve said in any presentation, blog post or article.

So have I become one of Oracle’s bitches over the last few years? Well, I’ve been an ACE since 1st April 2006 (yes, April fool’s day) and I’ve been an ACE Director since some time in 2007 or 2008. I can’t really remember to be honest, but let’s say for the sake of argument it’s been 6 years as an ACED. If it was becoming an ACED that made me an “Oracle doctrine submissive person” in the last couple of years, it must have taken Oracle four years of work to make me that way. :)

I don’t believe I alter my beliefs to fit any criteria, but I guess it is really difficult to be objective about yourself and I would be very interested to know what other people think about this. If I think about some common topics of discussion over the last few years where I don’t fall “on message”, they would probably be:

  • I believe Oracle is too expensive.
  • I believe the diagnostics and tuning pack should be part of the base product and available in all editions for free.
  • I believe anything to do with security should be part of the base product and available in all editions for free.
  • I don’t agree with the pricing of data guard standby nodes that are only used for managed recovery. If they are opened for use (read-only, active DG or snapshot standby) I can see why Oracle would want to charge.
  • Although I love the functionality of Cloud Control, I think the implementation is suffering from really bad bloat. It also exhibits some irregularities when different teams work on different aspects of the same functionality, as I discussed here.
  • I am a fan of certification from the perspective of personal development, but I don’t think the piece of paper is worth anything in itself. I’ve written about this here. Having said that, I do agree with the recent re-certification thing.

I’ve just had a look through my posts over the last year and if anything, I would say I’m promoting KeePass and MobaXterm more than Oracle. :) I know I get a little gushy about the ACE Program during conference write ups, and maybe that annoys people a bit, but I just can’t see that I’ve become a total drone… (Denial is not just a river in Africa?)

Anyway, I have two things to say in closing:

  • To people in the Oracle ACE program : If you are worried about what you should and shouldn’t say, my advice is try to be as honest as possible. If the people in the community lose faith in the members of the program, then it is worth nothing!
  • To people in the community : If you honestly believe you see a change in behaviour when someone joins the program you should call them out on it. I would suggest you do this in private and give some examples of situations that give you concern. If they are “the type of people the program needs”, they should be concerned about this also!



PS. For those that feel the need to, please don’t wade in with comments in my defence as I don’t think this is either necessary or helpful. I think the person in question had a genuine concern and quite frankly that makes it a concern of mine also…

Oracle ACE = Oracle’s Bitch? was first posted on October 22, 2014 at 11:02 pm.


A recent question on the OTN forum asked about narrowing down the cause of deadlocks, and this prompted me to set up a little example. Here’s a deadlock graph of a not-quite-standard type:

Deadlock graph:
                                          ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name                             process session holds waits  process session holds waits
TX-00040001-000008EC-00000000-00000000         50     249     X             48       9           X
TX-000A001F-000008BC-00000000-00000000         48       9     X             50     249           S

My session (the one that dumped the trace file) is 249, and I was blocked by session 9. The slight anomaly, of course, is that I was waiting on a TX lock in mode 4 (Share) rather than the more common mode 6 (eXclusive).

There are plenty of notes on the web these days to tell you that this wait relates in some way to a unique index (or some associated referential integrity) or an ITL wait. (Inevitably there are a couple of other less frequently occurring and less well documented reasons, such as waits for tablespaces to change state, but I’m going to ignore those for now.) The question is, how do I tell whether this example is related to uniqueness (indexing) or ITLs? For recent versions of Oracle the answer is in the rest of the trace file, which now holds the recent wait history for the session that dumped the trace file.
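As an aside, it doesn’t take much to manufacture this particular wait. A minimal sketch (my own setup, not the one from the forum question): squeeze the ITL by creating a table with INITRANS 1 and no free space in the block, then update different rows of the same block from separate sessions.

```sql
-- Table with one initial ITL slot and no spare space in the block
-- for the interested transaction list to grow into.
CREATE TABLE itl_demo (id NUMBER, val VARCHAR2(10))
PCTFREE 0 INITRANS 1;

-- Pack plenty of small rows into each block.
INSERT INTO itl_demo
SELECT rownum, rpad('x', 10, 'x')
FROM   dual
CONNECT BY level <= 1000;

COMMIT;

-- Session 1 (leave the transaction open):
--   UPDATE itl_demo SET val = 'a' WHERE id = 1;
-- Session 2 (leave the transaction open):
--   UPDATE itl_demo SET val = 'b' WHERE id = 2;
-- Session 3: an update to a third row in the same block may now hang on
-- 'enq: TX - allocate ITL entry' (a TX wait in mode 4), because the block
-- has no room left to allocate another ITL slot.
```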

Reading down my trace file, past the line which says “Information for THIS session”, I eventually get to this:

    Current Wait Stack:
     0: waiting for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0xa001f, sequence=0x8bc
        wait_id=80 seq_num=81 snap_id=1

So it didn’t take me long to find out I had an ITL problem (which should be a pretty rare occurrence in newer versions of Oracle); but there’s more:


    There is at least one session blocking this session.
      Dumping 1 direct blocker(s):
        inst: 1, sid: 9, ser: 40192
      Dumping final blocker:
        inst: 1, sid: 9, ser: 40192
    There are 2 sessions blocked by this session.
    Dumping one waiter:
      inst: 1, sid: 357, ser: 7531
      wait event: 'enq: TX - allocate ITL entry'


    Session Wait History:
        elapsed time of 0.000035 sec since current wait
     0: waited for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0x5000c, sequence=0xa39
        wait_id=79 seq_num=80 snap_id=1
        wait times: snap=5.002987 sec, exc=5.002987 sec, total=5.002987 sec
        wait times: max=5.000000 sec
        wait counts: calls=2 os=2
        occurred after 0.000047 sec of elapsed time
     1: waited for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0xa001f, sequence=0x8bc
        wait_id=78 seq_num=79 snap_id=1
        wait times: snap=1 min 4 sec, exc=1 min 4 sec, total=1 min 4 sec
        wait times: max=1 min 4 sec
        wait counts: calls=22 os=22
        occurred after 0.000032 sec of elapsed time

     8: waited for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0x5000c, sequence=0xa39
        wait_id=71 seq_num=72 snap_id=1
        wait times: snap=5.001902 sec, exc=5.001902 sec, total=5.001902 sec
        wait times: max=5.000000 sec
        wait counts: calls=2 os=2
        occurred after 0.000042 sec of elapsed time
     9: waited for 'enq: TX - allocate ITL entry'
        name|mode=0x54580004, usn<<16 | slot=0xa001f, sequence=0x8bc
        wait_id=70 seq_num=71 snap_id=1
        wait times: snap=4.005342 sec, exc=4.005342 sec, total=4.005342 sec
        wait times: max=4.000000 sec
        wait counts: calls=2 os=2
        occurred after 0.000031 sec of elapsed time


    Sampled Session History of session 249 serial 3931

    The history is displayed in reverse chronological order.

    sample interval: 1 sec, max history 120 sec
      [9 samples,                                          11:14:50 - 11:14:58]
        waited for 'enq: TX - allocate ITL entry', seq_num: 81
          p1: 'name|mode'=0x54580004
          p2: 'usn<<16 | slot'=0xa001f
          p3: 'sequence'=0x8bc
          time_waited: >= 8 sec (still in wait)
      [5 samples,                                          11:14:45 - 11:14:49]
        waited for 'enq: TX - allocate ITL entry', seq_num: 80
          p1: 'name|mode'=0x54580004
          p2: 'usn<<16 | slot'=0x5000c
          p3: 'sequence'=0xa39
          time_waited: 5.002987 sec (sample interval: 4 sec)

The little report that follows the initial wait state shows that the situation was a little messy – session 9 was my first and last blocker, but there was another session tangled up in the chain of waits, session 357.

Following this there’s a set of entries from my v$session_wait_history, and if you look carefully at the slot and sequence that appear on the second line of each wait, you’ll notice that my waits had been alternating between TWO other sessions/transactions before I finally crashed.

Finally there’s a set of entries for my session extracted from v$active_session_history. (Question: I’m only allowed to query v$active_session_history if I’ve licensed the Diagnostic Pack, so should I shut my eyes when I get to this part of the trace file? ;) ) This breakdown also shows my session alternating between waits on the two different blockers, giving me a pretty good post-event breakdown of what was going on around the time of the deadlock.

MySQL: Troubleshooting an Instance for Beginners


So as you may know, my new position involves the MySQL world, and I’m in the process of picking up the language and inner workings of this DBMS. My teammate Alkin Tezuysal (@ask_dba on Twitter) has a very cool break-and-fix lab, which he will be running at Percona Live London 2014; if you are going, be sure not to miss it.

The first thing I tried was to bring up the service, but to my surprise, the MySQL user didn’t exist, so I started by creating it.

Note: Whenever you see “…”, it is to shorten the output.

[user-lab@ip-10-10-10-1 ~]$ service mysqld start
touch: cannot touch ‘/var/log/mysqld.log’: Permission denied
chown: invalid user: ‘mysql:mysql’
chmod: changing permissions of ‘/var/log/mysqld.log’: Operation not permitted
mkdir: cannot create directory ‘/var/lib/msql’: Permission denied
[user-lab@ip-10-10-10-1 ~]$ id mysql
id: mysql: no such user
[user-lab@ip-10-10-10-1 ~]$ sudo useradd mysql

So now that the user exists, I try to bring the service up, but we are back at square one: a configuration variable in the .cnf file is incorrect. And there is a further complication, as there is more than one .cnf file.

[user-lab@ip-10-10-10-1 ~]$ sudo su -
Last login: Thu Jul 31 11:37:21 UTC 2014 on pts/0
Last failed login: Tue Oct 14 05:45:47 UTC 2014 from on ssh:notty
There were 1269 failed login attempts since the last successful login.
[root@ip-10-10-10-1 ~]# service mysqld start
Initializing MySQL database: Installing MySQL system tables...
141014 17:05:46 [ERROR] /usr/libexec/mysqld: unknown variable 'tmpd1r=/var/tmp'
141014 17:05:46 [ERROR] Aborting

141014 17:05:46 [Note] /usr/libexec/mysqld: Shutdown complete

Installation of system tables failed! Examine the logs in
/var/lib/msql for more information.



In the Oracle world this would be easier to troubleshoot. Here in the MySQL world, the best way to see which .cnf files are actually being read is with an strace command.

[root@ip-10-10-10-1 ~]# strace -e trace=open,stat /usr/libexec/mysqld
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
open("/lib64/libaio.so.1", O_RDONLY|O_CLOEXEC) = 3
stat("/etc/my.cnf", {st_mode=S_IFREG|0644, st_size=255, ...}) = 0
open("/etc/my.cnf", O_RDONLY)           = 3
stat("/etc/mysql/my.cnf", 0x7fffe4b38120) = -1 ENOENT (No such file or directory)
stat("/usr/etc/my.cnf", {st_mode=S_IFREG|0644, st_size=25, ...}) = 0
open("/usr/etc/my.cnf", O_RDONLY)       = 3
stat("/root/.my.cnf", {st_mode=S_IFREG|0644, st_size=33, ...}) = 0
open("/root/.my.cnf", O_RDONLY)         = 3
141014 17:12:05 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!
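As an aside, strace isn’t the only way to get this list: mysqld can print its own option-file search order. A small sketch, where the excerpt is a hand-copied sample of what `mysqld --help --verbose` prints for this build (the paths match the order strace revealed above):

```shell
# mysqld reports its option-file search order directly:
#   mysqld --help --verbose | grep -A1 'Default options'
# Demonstrated here on a captured copy of that output:
help_excerpt="Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf"
echo "$help_excerpt" | grep -A1 'Default options'
```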

So now I can see that /usr/etc/my.cnf is the one with the misspelled variable, so I fix it.

[root@ip-10-10-10-1 ~]# cat /usr/etc/my.cnf
[root@ip-10-10-10-1 ~]# sed -i -e 's/tmpd1r/tmpdir/' /usr/etc/my.cnf
[root@ip-10-10-10-1 ~]# cat /usr/etc/my.cnf
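When editing config files in place like this, it can be worth rehearsing the substitution on a scratch copy first. A minimal sketch, where the scratch file and its contents are made up for illustration but the sed expression is the one used above:

```shell
# Rehearse the typo fix on a throwaway copy before touching the real my.cnf
tmpcnf=$(mktemp)
printf '[mysqld]\ntmpd1r=/var/tmp\n' > "$tmpcnf"
sed -i -e 's/tmpd1r/tmpdir/' "$tmpcnf"   # same expression as above
grep tmpdir "$tmpcnf"                    # prints: tmpdir=/var/tmp
rm -f "$tmpcnf"
```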

Another try, but again the same result; even worse this time, as there is no output at all. After digging around, I found that the place to look is /var/log/mysqld.log, and the problem was that some of the database files belonged to the root user instead of the mysql user.

[root@ip-10-10-10-1 ~]# service mysqld start
MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]
[root@ip-10-10-10-1 ~]# cat /var/log/mysqld.log
141014 17:16:33 mysqld_safe Starting mysqld daemon with databases from /var/lib/msql
141014 17:16:33 [Note] Plugin 'FEDERATED' is disabled.
/usr/libexec/mysqld: Table 'mysql.plugin' doesn't exist
141014 17:16:33 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
141014 17:16:33 InnoDB: The InnoDB memory heap is disabled
141014 17:16:33 InnoDB: Mutexes and rw_locks use GCC atomic builtins
141014 17:16:33 InnoDB: Compressed tables use zlib 1.2.7
141014 17:16:33 InnoDB: Using Linux native AIO
/usr/libexec/mysqld: Can't create/write to file '/var/tmp/ib1rikjr' (Errcode: 13)
141014 17:16:33  InnoDB: Error: unable to create temporary file; errno: 13
141014 17:16:33 [ERROR] Plugin 'InnoDB' init function returned error.
141014 17:16:33 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
141014 17:16:33 [ERROR] Unknown/unsupported storage engine: InnoDB
141014 17:16:33 [ERROR] Aborting

141014 17:16:33 [Note] /usr/libexec/mysqld: Shutdown complete
[root@ip-10-10-10-1 ~]# perror 13
Error code 13: Permission denied
[root@ip-10-10-10-1 ~]# ls -l /var/lib/mysql/mysql/plugin.*
-rw-rw---- 1 root root 8586 Mar 13  2014 /var/lib/mysql/mysql/plugin.frm
-rw-rw---- 1 root root    0 Mar 13  2014 /var/lib/mysql/mysql/plugin.MYD
-rw-rw---- 1 root root 1024 Mar 13  2014 /var/lib/mysql/mysql/plugin.MYI
[root@ip-10-10-10-1 ~]# chown -R mysql:mysql /var/lib/mysql/mysql/
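perror ships with the MySQL packages; if it isn’t on the PATH, the same errno text is available from the OS itself, for example via Python’s standard library:

```shell
# Error code 13 is the OS-level EACCES, independent of MySQL
python3 -c 'import os; print(os.strerror(13))'
# prints: Permission denied
```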

So I think, yay, I’m set and it will come up! I give it one more shot and, you guessed it, same result but a different error :( This time the problem seemed to be that the configured InnoDB buffer pool (100 GB) is more memory than the machine has, so we change it.

[root@ip-10-10-10-1 ~]# service mysqld start
MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]
141014 17:36:15 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
141014 17:36:15 [Note] Plugin 'FEDERATED' is disabled.
141014 17:36:15 InnoDB: The InnoDB memory heap is disabled
141014 17:36:15 InnoDB: Mutexes and rw_locks use GCC atomic builtins
141014 17:36:15 InnoDB: Compressed tables use zlib 1.2.7
141014 17:36:15 InnoDB: Using Linux native AIO
141014 17:36:15 InnoDB: Initializing buffer pool, size = 100.0G
InnoDB: mmap(109890764800 bytes) failed; errno 12
141014 17:36:15 InnoDB: Completed initialization of buffer pool
141014 17:36:15 InnoDB: Fatal error: cannot allocate memory for the buffer pool
141014 17:36:15 [ERROR] Plugin 'InnoDB' init function returned error.
141014 17:36:15 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
141014 17:36:15 [ERROR] Unknown/unsupported storage engine: InnoDB
141014 17:36:15 [ERROR] Aborting
[root@ip-10-10-10-1 ~]# grep 100 /etc/my.cnf
[root@ip-10-10-10-1 ~]# sed -i -e 's/100G/256M/' /etc/my.cnf
[root@ip-10-10-10-1 ~]# grep innodb_buffer_pool_size /etc/my.cnf

Now, I’m not even expecting this instance to come up, and I am correct: this time a file has the wrong ownership.

[root@ip-10-10-10-1 ~]# service mysqld start
MySQL Daemon failed to start.
Starting mysqld:                                           [FAILED]
root@ip-10-10-10-1 ~]# cat /var/log/mysqld.log
141014 17:37:15 InnoDB: Initializing buffer pool, size = 256.0M
141014 17:37:15 InnoDB: Completed initialization of buffer pool
141014 17:37:15  InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File name ./ibdata1
InnoDB: File operation call: 'open'.
InnoDB: Cannot continue operation.
141014 17:37:15 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
[root@ip-10-10-10-1 ~]# ls -l /var/lib/mysql/ibdata1
-rw-rw---- 1 27 27 18874368 Mar 13  2014 /var/lib/mysql/ibdata1
[root@ip-10-10-10-1 ~]# ls -l /var/lib/mysql
total 83980
-rw-rw---- 1    27    27 18874368 Mar 13  2014 ibdata1
-rw-rw---- 1    27    27 33554432 Mar 13  2014 ib_logfile0
-rw-rw---- 1    27    27 33554432 Mar 13  2014 ib_logfile1
drwx------ 2 mysql mysql     4096 Mar 13  2014 mysql
drwx------ 2 root  root      4096 Mar 13  2014 performance_schema
drwx------ 2 root  root      4096 Mar 13  2014 test
[root@ip-10-10-10-1 ~]# chown -R mysql:mysql /var/lib/mysql
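A quick way to confirm a chown like this caught everything is to ask find for anything not owned by the expected user. Sketched here on a scratch directory owned by the current user; against the real datadir you would run `find /var/lib/mysql ! -user mysql`:

```shell
# List anything under a directory NOT owned by the given user;
# empty output means ownership is consistent.
dir=$(mktemp -d)
touch "$dir/ibdata1" "$dir/ib_logfile0"
find "$dir" ! -user "$(id -un)"   # empty: everything is owned by us
rm -rf "$dir"
```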

Now, I wasn’t even expecting the service to come up, but to my surprise it came up!

[root@ip-10-10-10-1 ~]# service mysqld start
Starting mysqld:                                           [  OK  ]

So now what I wanted to do was connect and start working, but again, there was another error! I saw that it was related to the socket file mysql.sock, so I changed the setting to the correct value in our .cnf file.

[root@ip-10-10-10-1 ~]# mysql
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
[root@ip-10-10-10-1 ~]# service mysql status
mysql: unrecognized service
[root@ip-10-10-10-1 ~]# service mysqld status
mysqld (pid  5666) is running...
[root@ip-10-10-10-1 ~]# ls -l /tmp/mysql.sock
ls: cannot access /tmp/mysql.sock: No such file or directory
[root@ip-10-10-10-1 ~]# grep socket /var/log/mysqld.log | tail -n 1
Version: '5.5.36'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)
[root@ip-10-10-10-1 ~]# lsof -n | grep mysqld | grep unix
mysqld    5666    mysql   12u     unix 0xffff880066fbea40       0t0     981919 /var/lib/mysql/mysql.sock
[root@ip-10-10-10-1 ~]# cat /etc/my.cnf


[root@ip-10-10-10-1 ~]# vi /etc/my.cnf
[root@ip-10-10-10-1 ~]# service mysqld restart
Stopping mysqld:                                           [  OK  ]
Starting mysqld:                                           [  OK  ]
[root@ip-10-10-10-1 ~]# mysql -p
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.5.36 MySQL Community Server (GPL)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
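Rather than eyeballing the socket path in that log line, it can be extracted mechanically. A small sed sketch against a copy of the mysqld.log line shown earlier:

```shell
# Pull the socket path out of the mysqld.log version line
line="Version: '5.5.36'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)"
echo "$line" | sed -n "s/.*socket: '\([^']*\)'.*/\1/p"
# prints: /var/lib/mysql/mysql.sock
```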



As you can see, there are different ways to troubleshoot the startup of a MySQL instance. I hope this helps you out on your journey as you start using this DBMS, and if you know of another way, let me know in the comments section below.

Please note that this blog post was originally published on my personal blog.

High System Time on Windows Running in a VM

Recently I’ve seen an issue with CPU usage on a server running Windows 2003 Server in VMware. This is a small virtual machine with just 2 cores allocated (which are possibly mapped to “threads” at the host level, but I don’t know the details). For some reason, very high System CPU time was reported in a Statspack report.

Here is how it looks in a 1 hour Statspack report:

Host CPU  (CPUs: 2  Cores: 2  Sockets: 0)
~~~~~~~~              Load Average
                      Begin     End      User  System    Idle     WIO     WCPU
                    ------- -------   ------- ------- ------- ------- --------
                                         3.04    8.77   88.19

Note that the System CPU time is more than twice the User CPU time on average (remember that averages can sometimes be misleading). This caught my attention as usual. Although the average CPU usage is not really high, this server felt somewhat sluggish even over a one-hop RDP connection over the VPN.
I tried to find out what was going on. Since I’m not a Windows guy, I did not know what kind of tools could be used to track down places in the OS kernel that take too much time. On Linux this is relatively easy, starting with the strace/pstack/perf utilities and other command line tools. Windows is different.

I started to search for the available options, and the first thing to find is of course Perfmon, which lets you track and visualize different OS-related metrics (counters in Perfmon terminology) at the system, CPU, or process level. I used it to capture a few key metrics such as User Time, System Time (which is apparently called Privileged Time on Windows), queue length and context switches per second. The issue is visible in a graph of the CPU usage:

Here the white line represents Privileged (or System) CPU, and the yellow line is Total CPU. It’s clear that almost all the CPU used is accounted to the Privileged part.
By the way, it is actually very easy to see a similar picture in the standard Performance tab of Task Manager: just select View, then Show Kernel Times, and the Privileged part of the used CPU will be displayed in red.

After that I searched for details of why Privileged CPU time could be so high. A good article that I found is here. Although it is relatively old, it fits my case, as the OS is a 32-bit Windows 2003 Server. The article points to a tool called KernRates. This is a command line tool with a very simple interface: you run it, wait for some time, and stop it with Ctrl-C. The tool then prints a profile of kernel activity by module. Here is what I saw:

C:\Program Files\KrView\Kernrates>Kernrate_i386_Win2000.exe
<         KERNRATE LOG           >
Date: 2014/09/03   Time: 12:39:21
Machine Name: ***
Number of Processors: 2
Physical Memory: 3072 MB
Pagefile Total: 6996 MB
Virtual Total: 2047 MB
PageFile1: \??\C:\pagefile.sys, 4080MB
OS Version: 5.2 Build 3790 Service-Pack: 2.0

Kernrate User-Specified Command Line:

Kernel Profile (PID = 0): Source= Time,
Using Kernrate Default Rate of 25000 events/hit
Starting to collect profile data

***> Press ctrl-c to finish collecting profile data
===> Finished Collecting Data, Starting to Process Results

------------Overall Summary:--------------

P0     K 0:00:03.703 ( 8.7%)  U 0:00:00.734 ( 1.7%)  I 0:00:38.046 (89.6%)  DPC 0:00:00.031 ( 0.1%)  Interrupt 0:00:00.406 ( 1.0%)
       Interrupts= 22840, Interrupt Rate= 538/sec.

P1     K 0:00:02.343 ( 5.5%)  U 0:00:00.656 ( 1.5%)  I 0:00:39.484 (92.9%)  DPC 0:00:00.000 ( 0.0%)  Interrupt 0:00:00.281 ( 0.7%)
       Interrupts= 20017, Interrupt Rate= 471/sec.

TOTAL  K 0:00:06.046 ( 7.1%)  U 0:00:01.390 ( 1.6%)  I 0:01:17.531 (91.2%)  DPC 0:00:00.031 ( 0.0%)  Interrupt 0:00:00.687 ( 0.8%)
       Total Interrupts= 42857, Total Interrupt Rate= 1009/sec.

Total Profile Time = 42484 msec

                                       BytesStart          BytesStop         BytesDiff.
    Available Physical Memory   ,       372678656,       363945984,        -8732672
    Available Pagefile(s)       ,      3285475328,      3281805312,        -3670016
    Available Virtual           ,      2131267584,      2130219008,        -1048576
    Available Extended Virtual  ,               0,               0,               0

                                  Total      Avg. Rate
    Context Switches     ,        61247,         1442/sec.
    System Calls         ,       305201,         7184/sec.
    Page Faults          ,        58440,         1376/sec.
    I/O Read Operations  ,         3496,         82/sec.
    I/O Write Operations ,         2637,         62/sec.
    I/O Other Operations ,        29567,         696/sec.
    I/O Read Bytes       ,     59649045,         17062/ I/O
    I/O Write Bytes      ,      2653894,         1006/ I/O
    I/O Other Bytes      ,    624604436,         21125/ I/O


Results for Kernel Mode:

OutputResults: KernelModuleCount = 109
Percentage in the following table is based on the Total Hits for the Kernel

Time   33235 hits, 25000 events per hit --------
 Module                                Hits   msec  %Total  Events/Sec
intelppm                              30310      42486    91 %    17835286
ntkrnlpa                               2337      42486     7 %     1375158
hal                                     271      42486     0 %      159464
mfehidk01                                74      42486     0 %       43543
Ntfs                                     58      42486     0 %       34128
mfehidk                                  52      42486     0 %       30598
mfeapfk                                  47      42486     0 %       27656
mfeavfk01                                17      42486     0 %       10003
tcpip                                    13      42486     0 %        7649
win32k                                   12      42486     0 %        7061
mfeavfk                                  10      42486     0 %        5884
fltmgr                                    6      42486     0 %        3530
CLASSPNP                                  3      42486     0 %        1765
SCSIPORT                                  3      42486     0 %        1765
RDPDD                                     2      42486     0 %        1176
afd                                       2      42486     0 %        1176
Npfs                                      2      42486     0 %        1176
NDIS                                      2      42486     0 %        1176
symmpi                                    2      42486     0 %        1176
TDTCP                                     1      42486     0 %         588
rdbss                                     1      42486     0 %         588
netbt                                     1      42486     0 %         588
mfetdi2k                                  1      42486     0 %         588
ipsec                                     1      42486     0 %         588
termdd                                    1      42486     0 %         588
TDI                                       1      42486     0 %         588
vmxnet                                    1      42486     0 %         588
KSecDD                                    1      42486     0 %         588
atapi                                     1      42486     0 %         588
volsnap                                   1      42486     0 %         588
ftdisk                                    1      42486     0 %         588

================================= END OF RUN ==================================
============================== NORMAL END OF RUN ==============================
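The Events/Sec column in the profile above is derived rather than measured: hits times the sampling rate (25000 events per hit) divided by the elapsed time. A quick check against the intelppm row, with the numbers copied from the output above:

```shell
# 30310 hits * 25000 events/hit over 42486 ms of profiling
awk 'BEGIN { hits = 30310; per_hit = 25000; msec = 42486
             printf "%.0f\n", hits * per_hit / (msec / 1000) }'
```

which agrees with the 17835286 shown in the table, to within rounding.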

The default output contains some basic information about the system, CPU usage, memory and context switching. The kernel module profile is the most interesting part here: it lists kernel modules by their internal names along with the profile data, and the number of times each module was found running during sampling is the most important figure. So in my case intelppm was the top running kernel module.
I searched again, now for intelppm, and found a few posts describing similar symptoms. Apparently intelppm is a CPU driver. It sometimes causes issues such as BSODs or high CPU usage, especially on cloned VMs where the CPU architecture changed along the way. It was not clear whether this is something that can be disabled, but there were posts suggesting that stopping this service (which is not listed in Services) had helped a few people. So I recommended the client try disabling the driver with the following commands:

sc config intelppm start= disabled
sc stop intelppm

Theoretically this should disable the Intel CPU driver, and Windows should try to use another one if available. When we tried it, the 2nd command (to stop the driver) failed with the following message:

[SC] ControlService FAILED 1052:

The requested control is not valid for this service.

So it is not possible to stop the driver online, and a Windows restart is necessary.
We restarted the VM. After that the situation was a bit different: the CPU time was somewhat reduced, but the privileged part was still quite high, now with hal (the Hardware Abstraction Layer) on top instead of intelppm:

Time   95865 hits, 25000 events per hit --------
 Module                                Hits   msec  %Total  Events/Sec
hal                                   82669     125183    86 %    16509629
ntkrnlpa                              11788     125183    12 %     2354153
mfehidk                                 474     125183     0 %       94661
mfeapfk                                 224     125183     0 %       44734
Ntfs                                    207     125183     0 %       41339
vmmemctl                                155     125183     0 %       30954
mfeavfk                                  92     125183     0 %       18373
tcpip                                    85     125183     0 %       16975
win32k                                   54     125183     0 %       10784
fltmgr                                   14     125183     0 %        2795
mfetdi2k                                 11     125183     0 %        2196
TDI                                      10     125183     0 %        1997
RDPWD                                     9     125183     0 %        1797
PartMgr                                   9     125183     0 %        1797
KSecDD                                    7     125183     0 %        1397
SCSIPORT                                  7     125183     0 %        1397
afd                                       6     125183     0 %        1198
symmpi                                    6     125183     0 %        1198
RDPDD                                     5     125183     0 %         998
ipsec                                     5     125183     0 %         998
NDIS                                      5     125183     0 %         998
CLASSPNP                                  5     125183     0 %         998
mfebopk                                   4     125183     0 %         798
Npfs                                      3     125183     0 %         599
termdd                                    3     125183     0 %         599
vmxnet                                    2     125183     0 %         399
volsnap                                   2     125183     0 %         399
ndisuio                                   1     125183     0 %         199
mrxsmb                                    1     125183     0 %         199
rdbss                                     1     125183     0 %         199
atapi                                     1     125183     0 %         199

But in terms of Oracle performance everything changed: everything now ran much faster, including simple queries in SQL*Plus. A particular query started to run 3 times faster on average:

-- stats before
SQL> @sqlstats cp9jr3hp1jupk
                  Elapsed     Ela/exec                            User IO     Rows per   Versi           Share  Avg hard
       Execs            s            s      CPU, s    Gets/exec         s         exec     ons   Loads Mem, MB parse, ms PX Exec
------------ ------------ ------------ ----------- ------------ --------- ------------ ------- ------- ------- --------- -------
         135      170.093        1.260      155.31          835      4.29            1       1       1     .02    350.99       0

-- stats after
SQL> @sqlstats cp9jr3hp1jupk
                  Elapsed     Ela/exec                            User IO     Rows per   Versi           Share  Avg hard
       Execs            s            s      CPU, s    Gets/exec         s         exec     ons   Loads Mem, MB parse, ms PX Exec
------------ ------------ ------------ ----------- ------------ --------- ------------ ------- ------- ------- --------- -------
         604      212.151         .351      154.75        1,013     31.79            1       1       1     .02      8.34       0
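From the two snapshots, elapsed time per execution dropped from 1.260 s to 0.351 s, which is where the roughly “3 times faster” figure comes from:

```shell
# Ratio of per-execution elapsed times before and after the change
awk 'BEGIN { printf "%.1fx faster\n", 1.260 / 0.351 }'
# prints: 3.6x faster
```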

It looks like the change helped, but there is no confirmation of it at the OS level. This makes me think the improvement in performance may be attributable to something else, such as the OS, the hypervisor, or some interaction between them and Oracle. In any case, high system time is not good, and it usually indicates that something is wrong.

Pythian at PASS 2014

Join us in Seattle for SQL PASS Summit 2014—planned by and for the Microsoft SQL Server community—where some of our very own will be presenting. You’ll notice that PASS has a funny way of scheduling speaking sessions, so both of our experts are presenting at the same time.

Also be sure to visit us at booth #226 for a chance to win a Sonos Play:1. The Exhibit Hall will be open all day Wednesday, Thursday, and Friday morning.


Configuring SharePoint 2013 as a Business Intelligence Platform by Edwin Sarmiento
Wednesday November 5 — 1:30-2:45 PM
Room 608

Edwin Sarmiento, a Microsoft MVP, Certified Master, and Principal Consultant in Pythian’s Advanced Technology Group, will be presenting a session called Configuring SharePoint 2013 as a Business Intelligence Platform.

With SharePoint as the business collaboration platform of choice for the enterprise, it has become an important piece in the overall Microsoft Business Intelligence (BI) stack. However, deploying and configuring it to support Microsoft BI solutions has become a cumbersome task for IT experts.

In this session, Edwin will demonstrate what it takes to successfully architect and design SharePoint 2013 as a BI platform. He will cover multiple configuration scenarios and the different technologies necessary to build the infrastructure. Attendees will walk away with the confidence to run Power Pivot, Power View, and Reporting Services in their SharePoint 2013 farms.


The Use Cases for In-Memory OLTP by Warner Chaves
Wednesday November 5 — 1:30-2:45 PM
Room 3AB

Warner Chaves, a Microsoft Certified Master and Principal Consultant in Pythian’s SQL Server practice, will be presenting a session called The Use Cases for In-Memory OLTP.

In this session, he will do a short introduction to the In-Memory OLTP feature before diving straight into the use cases where the new lockless/latchless concurrency control and native compilation really shine.

Demos will cover PAGELATCH contention (or lack thereof), use of non-persistent in-memory tables for ETL, and in-memory tables as “shock absorbers” for high throughput environments.

For each case, Warner will do a comparison of “classic” versus in-memory, what gains the audience can expect, and what patterns will yield the biggest benefits.


BONUS MATERIAL! Since its release back in April, SQL Server experts across the globe are becoming familiar with the top features in Microsoft SQL Server 2014—the In-Memory OLTP engine, the AlwaysOn enhancements, and more. Warner couldn’t help but notice that there were a few features that weren’t getting the same attention and filmed a video series sharing the most underrated features of SQL Server 2014. We’ll be publishing that series during PASS, so follow @Pythian on Twitter to receive our updates.


Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Microsoft SQL Server expertise.