Exadata X5 – A Practical Point of View of the New Hardware and Licensing

Oracle recently announced its latest iteration of Exadata – X5-2. It includes a refresh of the hardware to the most recent Xeon® E5-2699 v3 CPUs. These new CPUs boost the total core count in a full rack to 288. This is higher than the current 8-socket “big machine” version, the X4-8, which has only 240 cores.

But the most exciting part is the all-flash version of Exadata. In the previous generation – X4 – Oracle had to switch from 15K to 10K drives in order to boost capacity from 600 GB to 1200 GB per hard drive, so that disk space stayed larger than the flash cache. At the time of the X4 announcement, we were already wondering why Oracle was still offering high-speed disks rather than switching to all flash, and now we know why: that type of high-performance flash wasn’t quite ready.

Maintaining high IO rates over long periods of time required changes to the ILOM so that cooling fan speed could be controlled based on many individual temperature sensors inside the flash cards (details). Removing the SAS controller and using the new NVMe connectivity resulted in much higher bandwidth per drive – 3.2 GBytes/sec vs. the old 1.2 GBytes/sec over SAS.

With temperature and bandwidth sorted out, we now have a super-high-performance option for Exadata – EF, or Extreme Flash – which delivers a stunning 263 GB/sec uncompressed scan speed in a full rack. The performance gap between the High Capacity option and the EF flash option is now much wider. The high-performance option in Exadata X5 is finally viable; in Exadata X4 it made so little difference that it was pointless.

[Image: X4 vs. X5 comparison]

The one thing I wonder about with the X5 announcement is why the X5-2 storage server still uses the rather old and outdated 8-core CPUs. I’ve seen many cases where a Smart Scan on an HCC table is CPU bound on the storage server, even when reading from spinning disk. My guess is that there’s some old CPU inventory to clean up. But that may not end up being such a problem (see the “all columnar” flash cache feature).

But above all, the most important change was the incremental licensing option. With 36 cores per server, even the 1/8th rack configuration ran into multiple millions of dollars in licenses, which in many cases was too much for the problem at hand.

The new smallest configuration is:

  • 1/8th rack, with 2 compute nodes
  • 8 cores enabled per compute node (16 total)
  • 256 GB RAM per node (upgradable to 768 GB per node)
  • 3 storage servers with only half the cores, disks and flash enabled

Then you can license additional cores as you need them, two cores at a time, similar to how the ODA licensing option works. You cannot reduce the number of licensed cores later.
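
Just to make the arithmetic concrete, here is a tiny sketch (in TypeScript, purely for illustration) that enumerates the licensed-core totals the rules above allow across the two compute nodes; the 36-cores-per-node ceiling is taken from the X5-2 compute server mentioned earlier, and this is not any kind of official Oracle licensing calculator.

```typescript
// Illustrative only: enumerate the licensed-core totals allowed for the two
// compute nodes of the smallest X5-2 configuration, per the rules above.
// Start at 8 cores per node (16 total), add 2 cores at a time, never reduce.
// The 36-cores-per-node ceiling is the fully enabled X5-2 compute server.
const MIN_PER_NODE = 8;
const MAX_PER_NODE = 36;
const NODES = 2;
const STEP = 2;

function validLicensedCoreTotals(): number[] {
  const totals: number[] = [];
  for (let t = MIN_PER_NODE * NODES; t <= MAX_PER_NODE * NODES; t += STEP) {
    totals.push(t);
  }
  return totals;
}

console.log(validLicensedCoreTotals()); // [16, 18, 20, ..., 72]
```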

The licensing rule changes go even further. You can now mix and match compute and storage servers to create even more extreme options. Some non-standard examples:

  • Extreme Memory – more compute nodes with max RAM, reduced licensed cores
  • Extreme Storage – replace compute nodes with storage nodes, reduced licensed cores

[Image: custom X5 configuration]
Link to video

In conclusion, the Oracle Exadata X5 configuration options and the licensing changes that come with them allow an architect to craft a system that meets almost any need and can grow in easy, small increments in the future, potentially without any hardware changes.

There are many more exciting changes in Oracle 12c, Exadata X5 and the new storage server software, which I may cover in the future as I explore them in detail.

Log Buffer #411, A Carnival of the Vanities for DBAs

This Log Buffer Edition brings you some blog posts from Oracle, SQL Server and MySQL.

Oracle:

Suppose you have a global zone with multiple zpools that you would like to convert into a native zone.

The digital revolution is creating abundance in almost every industry—turning spare bedrooms into hotel rooms, low-occupancy commuter vehicles into taxi services, and free time into freelance time.

Every time I attend a conference, the Twitter traffic about said conference is obviously higher. It starts a couple of weeks or even months before, builds steadily as the conference approaches, and then hits a crescendo during the conference.

Calling All WebLogic Users: Please Help Us Improve WebLogic Documentation!

Top Two Cloud Security Concerns: Data Breaches and Data Loss

SQL Server:

This article describes a way to identify the user who truncated the table and how you can recover the data.

When SQL Server 2014 was released, it included Hekaton, Microsoft’s much talked about memory-optimized engine that brings In-Memory OLTP into play.

Learn how you can easily spread your backup across multiple files.

Daniel Calbimonte has written a code comparison for MariaDB vs. SQL Server as it pertains to how to comment, how to create functions and procedures with parameters, how to store query results in a text file, how to show the top n rows in a query, how to use loops, and more.

The article shows a simple way to schedule index rebuilds and reorganizations, using a scheduled job, for a SQL Server instance with 106 databases used by one application.

MySQL:

How to setup a PXC cluster with GTIDs (and have async slaves replicating from it!)

vCloud Air and business-critical MySQL

MySQL Dumping and Reloading the InnoDB Buffer Pool

How to benchmark MongoDB

MySQL Server on SUSE 12

SQL Server 2014 Cumulative Update 6

Hello everyone,

Just a quick note to let you know that this week, while most of North America was enjoying a break, Microsoft released the 6th cumulative update for SQL Server 2014. This update contains fixes for 64 different issues, distributed as follows:

[Image: SQL Server 2014 Cumulative Update 6 fix distribution]

As the name implies, this is a cumulative update, which means it is not necessary to install the previous five if you don’t already have them. Please remember to test any update thoroughly before applying it to production.

The cumulative update and the full release notes can be found here: https://support.microsoft.com/kb/3031047/en-us?wa=wsignin1.0


DBMS_INMEMORY_ADVISOR

If you follow the Oracle In-Memory / optimizer team, you have probably seen this…

Oracle Weblogic SAML2 Authorization

Grammatically, the title doesn’t make much sense, but those were the keywords I typed a couple of years ago when I started working on integrating our JEE applications into our SSO system.

Who enjoys the feather display of a male peacock?



Who appreciates the display of feathers by a male peacock?

Female peacocks seem to get a kick out of them. They seem to play a role in mating rituals.

Who else? Why, humans, of course!

We know that humans greatly appreciate those displays, because of the aaahing and ooohing that goes on when we see them. We like those colors. We like the iridescence. We like the shapes and patterns.

If one were to speculate on why a female peacock gets all worked up about a particular male's feather display, we would inevitably hear about instinctual responses, hard-wiring, genetic determinism, and so on.

And if one were to speculate on why a human goes into raptures, we would then experience a major shift in explanation. 

Time to talk about anything but a physiological, hard-wired sort of response.

No, for humans, the attraction has to do with our big brains, our ability to create and appreciate "art". And that is most definitely not something other animals do, right?

Oh, sure, right. Like these instinctive, hard-wired bowerbird mating nests:


That clearly has nothing to do with an aesthetic sense or "art". Just instinct.

Why? Because we humans say so. We just assert this "fact."

Most convenient, eh?

Database Provisioning in Minutes: Using Enterprise Manager 12c DBaaS Snap Clone and EMC Storage

A little while back I wrote a post on the official Oracle Enterprise Manager blog about using Enterprise Manager 12c Snap Clone with EMC storage. Last week, I also presented on exactly the same topic at the RMOUG Training Days in Denver, Colorado, and there was quite a bit of interest in the subject.

Given that level of interest, I thought I’d let you know about another opportunity to hear all about it, even if you weren’t able to attend the RMOUG conference. You may know that product management does a lot of webcasts around different areas of EM12c functionality. Well, the very next one is on Snap Clone with EMC storage, in this case co-presented by Oracle and EMC. You can click on the “Register” button on this link to register your attendance and get all the details, such as dial-in numbers. It’s on this Wednesday, February 25 at 8:00 A.M. PT (11:00 A.M. ET) – those are US time zones, BTW. Take advantage of this opportunity, and while you’re looking at the webcast page, check out some of the other fantastic opportunities coming up to hear about the really cool things you can do with EM12c!

APEX 5.0: the way to use Theme Roller

Once you have created your new application using the Universal Theme, it's time to customise your theme with Theme Roller.

Run your application and click the Theme Roller link in the APEX Developer Toolbar:


Theme Roller will open. I won't go into every section, but I want to highlight the most important sections of Theme Roller in this post:
  1. Style: two styles come with APEX 5.0: Blue and Gray. You can start from one of those and see how your application changes color. The style sets predefined colors for the different parts of the templates.

  2. Color Wheel: when you want to quickly change your colors, an easy way to see different options is the Color Wheel. There are two modes: monochromatic (2 points) and dual color (3 points - see screenshot). When you change one point, the other point is mapped to a complementary color. You can then move the third point to play with those colors further.

  3. Global Colors: if the Color Wheel is not specific enough for what you need, you can start by customising the Global Colors. These are the main colors of the Universal Theme, and they drive the specific components. You can still customise the individual components, e.g. the header, by clicking further down in the list (see next screenshot).

  4. Containers etc. allow you to define the specific components. A check icon means it's the standard color that comes with the selected style, an "x" means the color was changed, and an "!" means the contrast is probably not great.
[Screenshots: original with style / after changing colors]


This is just awesome... but what if you don't like the changes you made?

Luckily, you can reset your entire style, or refresh a specific section by clicking its refresh icon. There are also undo and redo buttons. But that is not all... for power users: if you hold "ALT" while hovering over a color, you can reset just that color (only that color gets a refresh icon, and clicking it resets it).

Note that all the changes you make are stored locally on your computer in your browser (HTML5 local storage), so you don't affect other users by playing with the different colors.
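
As an aside, that per-browser behaviour is simply how HTML5 local storage works. Here is a minimal TypeScript sketch of the general mechanism; the key name and data shape are invented for illustration and are not what Theme Roller actually writes.

```typescript
// Minimal sketch of HTML5 local storage keeping per-browser state.
// The key and data shape below are made up for illustration only.
interface StyleDraft {
  globalColors: Record<string, string>;
  updatedAt: string;
}

function saveDraft(draft: StyleDraft): void {
  // Stored only in this browser; other users of the application never see it.
  window.localStorage.setItem("theme-roller-draft", JSON.stringify(draft));
}

function loadDraft(): StyleDraft | null {
  const raw = window.localStorage.getItem("theme-roller-draft");
  return raw ? (JSON.parse(raw) as StyleDraft) : null;
}

saveDraft({ globalColors: { header: "#1f7aa6" }, updatedAt: new Date().toISOString() });
console.log(loadDraft());
```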

Finally, when you are done editing your color scheme, you can hit the Save As button to save all the colors to a new style. When you close Theme Roller, the style will go back to how it was.
The final step, to apply the new style so that everybody sees that version, is to go to User Interface Details (Shared Components) and set the style to the new one.

Note that this blog post is based on APEX 5.0 EA3; in the final version of APEX 5.0 (or 5.1) you might be able to apply the new style from Theme Roller directly.

Want to know more about Theme Roller and the Universal Theme? We're doing an APEX 5.0 UI training on May 12th in the Netherlands.

RMOUG Training Days 2015

Yet again, it was a fantastic time at the RMOUG Training Days 2015 conference, as it has been every year I have attended. That is in no small measure due to the incredible work of the organizing committee, and in particular the Training Days Director, my colleague Kellyn Pot’Vin-Gorman of dbakevlar.com fame. For me personally, the travel to get to Denver, Colorado was somewhat more daunting than in previous years (see my earlier post for why!), but once I got there it all went relatively smoothly. I flew in on the Sunday before the conference started to allow me to get over any problems from the trip, but as it turned out everything was just fine.

I had three abstracts accepted for the conference.

The first two were two-hour sessions on the first day of the conference (Tuesday), which was dedicated to deep-dive sessions. I had originally planned these to be short presentations by me, followed by hands-on labs for each of the different subjects. That fell through a few weeks before the conference when the hardware I had planned to use was taken away for an upgrade, so instead I put together longer presentations with demos. As it turned out, that was a Good Thing (TM), as I had way more attendees (particularly at the PDBaaS presentation) than I would have had hardware for anyway! The third session was a much more traditional presentation – a bunch of slides followed by a shorter demo – and again it was very well attended. There was lots of interest from attendees for all three, so from that perspective I was very happy.

Unfortunately I did have some technical issues with my laptop. I had some problems getting the material from my laptop onto the presentation screen, both from PowerPoint and the demo itself, so I’m going to have to spend some time sorting that out after I get home. :(

Having said that, the conference was still a blast. As I always do, I thoroughly enjoyed the interactions with attendees, but this time I also had the added enjoyment of interacting with a bunch of my colleagues from the Strategic Customer Program in the Enterprise Manager product management team – Kellyn Pot’Vin-Gorman, Courtney Llamas, Andrew Bulloch, and Werner de Gruyter. It’s when I interact with people like these guys that I realize just how much I still need to learn about Enterprise Manager as a product, particularly around the high end infrastructure architectures that the Strategic Customer Program folks normally work with.

Once the conference was finished, it was time to head up into the mountains at Breckenridge for some very relaxing R&R time. When I finish here it will be back to Intergalactic Headquarters near San Francisco for a week before I head home to be with family again. All in all, a fantastic conference as always, so thanks to Kellyn and all the organizing committee!

Automatic: Nice, but Not Necessary

Editor’s note: Here’s the first post from one of our newish team members, Ben. Ben is a usability engineer with a PhD in Cognitive Psychology, and by his own account, he’s also a below average driver. Those two factoids are not necessarily related; I just don’t know what his likes and dislikes are so I’m spit-balling.

Ben applied his research chops to himself and his driving using Automatic (@automatic), a doodad that measures your driving and claims to make you a better driver. So, right up his alley.

Aside from the pure research, I’m interested in this doodad as yet another data collector for the quantified self. As we generate mounds of data through sensors, we should be able to generate personal annual reports, a la Nicholas Felton, that have recommended actions and tangible benefits.

Better living through math.

Anyway, enjoy Ben’s review.

When I first heard about Automatic (@automatic), I was quite excited—some cool new technology that will help me become a better driver. The truth is, I’m actually not a big fan of driving, partly because I know I’m not as good a driver as I could be, so Automatic was a glimmer of hope that would lead me on the way to improving my skills.

Though I will eagerly adopt automated cars once they’re out and safe, the next best thing is to get better so I no longer mildly dread driving, especially when I’m conveying others. And one issue with trying to improve is knowing what and when you’re doing something wrong, so with that in mind (and for enterprise research purposes), I tried out Automatic.

Automatic is an app for your phone plus a gadget (called the Link) that plugs into your car’s diagnostics port; together they give you feedback on your driving and provide various ways to look at your trip data.

Automatic Link

The diagnostics port the Link plugs into is the same one your mechanic uses to see what might be wrong when your check engine light is ominously glaring on your dashboard. Most cars made after 1996 have one, but not all data is available for all cars. Mine is a 2004 Honda Civic, which doesn’t put out gas tank level data, meaning that MPG calculations may not be as accurate as they could be. But it still calculates MPG, and it seems to be reasonably accurate. I don’t, however, get the benefit of “time to fuel up” notifications, though I do wonder how much of a difference those notifications make.

The Link has its own accelerometer, so, combined with the data from the port and paired with your phone via Bluetooth, it can tell you about your acceleration, distance driven, speed, and location. It can also tell you what your “Check Engine” light means, and send out some messages in the event of a crash.

It gives three points of driving feedback: if you accelerate too quickly, brake too hard, or go over 70 mph. Each driving sin is relayed to you with its own characteristic tones emitted from the Link. It’s a delightful PC speaker, taking you way back to the halcyon DOS days (for those of you who were actually alive at the time). It also lets you know when it links up with your phone, and when it doesn’t successfully connect it outputs a sound much like you just did something regrettable in a mid-’80s Nintendo game.
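
For a rough sense of how feedback like this could be derived from trip data, here is a hypothetical sketch in TypeScript. The acceleration and braking thresholds are made-up illustrative numbers (only the 70 mph alert comes from Automatic’s own description), so this is not Automatic’s actual algorithm.

```typescript
// Hypothetical sketch: flag hard acceleration, hard braking, and speeding
// from a series of (time, speed) samples. Thresholds are illustrative.
interface Sample {
  t: number;    // seconds since trip start
  mph: number;  // speed in miles per hour
}

type DrivingEvent = { t: number; kind: "hard-accel" | "hard-brake" | "speeding" };

const SPEED_ALERT_MPH = 70;       // the top-speed alert Automatic describes
const ACCEL_LIMIT_MPH_PER_S = 7;  // made-up illustrative threshold
const BRAKE_LIMIT_MPH_PER_S = 8;  // made-up illustrative threshold

function flagEvents(samples: Sample[]): DrivingEvent[] {
  const events: DrivingEvent[] = [];
  for (let i = 1; i < samples.length; i++) {
    const dt = samples[i].t - samples[i - 1].t;
    if (dt <= 0) continue; // skip out-of-order or duplicate samples
    const rate = (samples[i].mph - samples[i - 1].mph) / dt; // mph per second
    if (rate > ACCEL_LIMIT_MPH_PER_S) events.push({ t: samples[i].t, kind: "hard-accel" });
    if (rate < -BRAKE_LIMIT_MPH_PER_S) events.push({ t: samples[i].t, kind: "hard-brake" });
    if (samples[i].mph > SPEED_ALERT_MPH) events.push({ t: samples[i].t, kind: "speeding" });
  }
  return events;
}

// Example trip: a brisk start, one hard stop, and a stretch over 70 mph.
console.log(flagEvents([
  { t: 0, mph: 0 }, { t: 5, mph: 40 }, { t: 6, mph: 30 },
  { t: 20, mph: 72 }, { t: 25, mph: 65 },
]));
```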

App screenshot

One of the main motivators for the driving feedback is to save gas—though you can change the top speed alert if you’d like. From their calculations, Automatic says 70 mph is about as fast as you want to go, given the gas-spent/time-it-will-take-to-get-there tradeoff.

Automatic web dashboard

Another cool feature is that it integrates with IFTTT (@ifttt), so you can set it up to do things like: when you get home, turn the lights on (if you have smart lights); or when you leave work, send a text to your spouse; or any number of other things—useful or not!

Is It Worth It?

The big question is, is it worth $99? It’s got a great interface, a sleek little device, and a good number of features, but for me, it hasn’t been that valuable (yet). For those with the check engine light coming up, it could conceivably save a lot of money if you can prevent unnecessary service on your car. Fortunately, my Civic has never shown me the light (knock on wood), though I’ll probably be glad I have something like Automatic when it does.

I had high hopes for the driver feedback, until I saw that it’s actually pretty limited. For the most part, the quick acceleration and braking are things I already avoided, and when it told me I did them, I usually had already realized it. (Or it was a situation out of my control that called for it.) A few times it beeped at me for accelerating where it didn’t feel all that fast, but perhaps it was.

I was hoping the feedback would be more nuanced and could allow me to improve further. The alerts would be great for new drivers, but don’t offer a whole lot of value to more experienced drivers—even those of us who would consider themselves below average in driving skill (putting me in an elite group of 7% of Americans).

The Enterprise Angle

Whether it’s Automatic, or what looks like it might be a more promising platform, Mojio (@getmojio), there are a few potentially compelling business reasons to check out car data-port devices.

One of the more obvious ones is to track mileage for work purposes—it gives you nice readouts of all your trips, and allows you to easily keep records. But that’s just making it a little easier for an employee to do their expense reports.

The most intriguing possibility (for me) is for businesses that manage fleets of regularly driven vehicles. An Automatic-like device could conceivably track the efficiency of cars/trucks and drivers, and let a business know if a driver needs better training, or if a vehicle is underperforming or might have some other issues. This could be done through real-time fuel efficiency, or tracking driving behavior, like what Automatic already does: hard braking and rapid acceleration.
If a truck seems to be getting significantly lower MPG than it should, they can see if it needs maintenance or if the driver is driving too aggressively. Though trucks probably get regular maintenance, this kind of data may allow for preventive care that could translate to savings.
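
As a sketch of what that could look like, here is a hypothetical snippet that flags vehicles whose recent fuel economy has dropped well below their own baseline; the 15% threshold and the data shape are assumptions for illustration, not a feature of Automatic or Mojio.

```typescript
// Hypothetical sketch: flag fleet vehicles whose recent fuel economy has
// dropped well below their own historical baseline. The 15% threshold is
// an illustrative assumption.
interface VehicleStats {
  id: string;
  baselineMpg: number;  // long-run average for this vehicle
  recentMpg: number;    // average over, say, the last week of trips
}

function flagUnderperformers(fleet: VehicleStats[], dropThreshold = 0.15): VehicleStats[] {
  return fleet.filter(v => v.recentMpg < v.baselineMpg * (1 - dropThreshold));
}

// Example: truck-7 is down roughly 20% and would be flagged for a maintenance
// check or a look at the driver's braking and acceleration events.
console.log(flagUnderperformers([
  { id: "truck-3", baselineMpg: 9.5, recentMpg: 9.2 },
  { id: "truck-7", baselineMpg: 9.5, recentMpg: 7.6 },
]));
```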

This kind of tracking could also be interesting for driver training, examining the most efficient or effective drivers and adopting an “Identify, Codify, Modify” approach.

Overall

I’d say this technology has some interesting possibilities, but may not be all that useful yet for most people. It’s fun to have a bunch of data, and to get some gentle reminders on driving practices, but the driver improvement angle from Automatic hasn’t left me feeling like I’m a better driver. It really seems that this kind of technology (though not necessarily Automatic, per se) lends itself more to fleet management, improving things at a larger scale.

Stay tuned for a review of Mojio, which is similar to Automatic, but features a cellular connection and a development platform, and hence more possibilities.