Critical Analysis of “Critical Analysis Meets Exadata”

 

Kevin Closson put out a post yesterday called Critical Analysis Meets Exadata, linking to two awesome videos. It’s well worth spending the time to watch these, even if (like me) you never get so much as a sniff of Exadata. 🙂

I was lucky enough to be one of several people asked to review these videos before they were released. I’m sure some of the performance gurus on the Oak Table had a lot to say, but of the several comments I fed back to Kevin, I would just like to post a couple here:

  • As a Joe Schmo DBA, I almost never get to see what is happening internally in the storage layer (SAN, NAS, etc.). For the most part the storage is a black box that presents a few LUNs to me. If the storage subsystem and connecting network are capable of pushing enough data to and from my servers that my RAC node CPUs become the bottleneck, that is awesome. So if I judge the storage grid part of the Exadata configuration the way I would judge any other SAN/NAS, it gets a big gold star, because it is good enough to keep my RAC node CPUs/cores, which are ridiculously expensive to license, working at full tilt most of the time.
  • I believe the storage cell licensing is sold on a per-disk basis, not per CPU core, so the storage grid being full of idle cores does not mean I’m paying for software licensing on idle cores. If Oracle reduced the total number of CPUs/cores in the cells, the licensing costs would be unaffected (see the rough sketch below). If, on the other hand, the storage cells could perform a lot more of the CPU-intensive load and free up the RAC nodes, then I guess the licensing situation would change, because Oracle wouldn’t want to lose those high-cost licenses from the RAC nodes.
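
To make that second point concrete, here is a minimal back-of-envelope sketch. The prices and hardware counts below are purely hypothetical placeholders (not Oracle list prices or a guaranteed rack configuration); the only point is that the storage cell line of the bill scales with disks while the database line scales with cores, so idle cores in the storage grid add nothing to it.

```python
# Back-of-envelope licensing sketch. All prices and counts are HYPOTHETICAL
# placeholders; the point is only how each line item scales.

RAC_NODE_CORES = 96    # e.g. 8 database servers x 12 cores (illustrative)
STORAGE_CELLS  = 14    # storage servers in the rack (illustrative)
DISKS_PER_CELL = 12
CORES_PER_CELL = 12    # could be halved or doubled without changing the bill

PRICE_PER_DB_CORE = 100  # hypothetical units, licensed per core on the RAC nodes
PRICE_PER_DISK    = 20   # hypothetical units, storage cell software per disk

rac_license  = RAC_NODE_CORES * PRICE_PER_DB_CORE
cell_license = STORAGE_CELLS * DISKS_PER_CELL * PRICE_PER_DISK

print(f"RAC node licensing (per core): {rac_license}")
print(f"Storage cell licensing (per disk): {cell_license}")
# CORES_PER_CELL never appears in either formula, so adding or removing cores
# in the storage grid leaves the licensing bill unchanged.
```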

Now Kevin is an architecture guy and I can see how, from his perspective, this setup sucks, because it does. He’s clearly demonstrated that. Then again, from a DBA perspective, do I really give a damn about some idle CPUs in the storage layer? For all I know, every other storage system out there could be just as stupid, especially now that it’s impossible to buy chips with small numbers of cores. 🙂

Like I said, you should watch the videos because they are great, but don’t be afraid to have a different opinion because you may be judging things by different standards. 🙂

Cheers

Tim…

Author: Tim...

DBA, Developer, Author, Trainer.

22 thoughts on “Critical Analysis of “Critical Analysis Meets Exadata””

  1. “don’t be afraid to have a different opinion because you may be judging things by different standards.”

    …So right! So, very, very right.

    I find that folks at the tactical level (that is not a pejorative) don’t care so much about a 14kW rack with lots of wasted resources, because so long as it gets the job done it isn’t their concern. Now, if you go a little higher up the food chain, where wasting money, electricity, HVAC and floor space are pesky little concerns, then the judging standard is sometimes quite different.

    The solution to the problem is to just build a sufficiently large RAC grid and stop wrestling with the imbalances. Doing so would keep the tactical level folks happy (because they still get to toil with RAC) and the performance would be better because the CPUs would always be in the right place.

    By the way, your tactical-minded viewpoint was shared by some folks on the Oak Table as well. As you might imagine, I got entirely different feedback from Mogens and James 🙂

    P.S., You can still get thinly-chopped processors. Google E5-2643. It’s a good one.

  2. I’m not a DBA – well, mostly 😉

    Yes, Tim… the storage cells are sold on a per-disk basis… or at least, that’s what the Exadata boys have quoted to us over the past 6 months in our current hardware upgrade project, where we are choosing between Exadata X2-2 and HP DL980-based solutions.

    In theory you’re right that, as a DBA, you could just ignore the fact that the storage cell CPUs are not working too hard, but I am a big fan of sweating expensive assets, so the videos were more than a little eye-opening as to how little use those storage cell CPUs were getting in some cases.

    Every scenario is going to be different – for example, I’m aware that we have lots of tables with many columns, where users will often select only a handful of them – that’s how their data works – and thus the smart projection offload processing would work very well for us… but whether we’re approaching a 96.5% reduction is certainly questionable!
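
    To put a toy number on that, here is a purely illustrative projection calculation. The column counts and widths are invented (and deliberately chosen to land on that 96.5% figure); they are not any real schema, and this ignores predicate filtering, compression and all the other factors that feed into the real reduction.

    ```python
    # Toy column-projection arithmetic. Column counts/widths are INVENTED for
    # illustration; real reduction also depends on filtering, compression, etc.

    total_columns    = 200   # a wide DW table (hypothetical)
    selected_columns = 7     # the "handful" of columns users actually query
    avg_col_bytes    = 10    # assume roughly equal average column width

    bytes_scanned  = total_columns * avg_col_bytes      # per row, read at the cell
    bytes_returned = selected_columns * avg_col_bytes   # per row, shipped to the DB node

    reduction = 1 - bytes_returned / bytes_scanned
    print(f"Returned vs scanned: {bytes_returned}/{bytes_scanned} bytes per row")
    print(f"Reduction from projection alone: {reduction:.1%}")  # 96.5% with these numbers
    ```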

  3. @Jeff Moss : The best way to prevent imbalances is to just go with a large RAC cluster and join the 21st century on modern storage options. Either way you’re buying into RAC. RAC is the tier that can process *all* code so just go with RAC and don’t build a bottlenecked storage solution.

  4. Kevin: Now if you had taken the power consumption argument in your talk, there would definitely have been no argument from me. 🙂

    Cheers

    Tim…

  5. > Now if you had taken the power consumption argument in your talk, there would definitely have been no argument from me.

    Tim, sorry for the petulance, but aren’t some things just *that* obvious? If I’m showing you a 14kW rack that has the majority of its CPUs burning power but not running code, can’t one deduce how wasteful (expensive) that is?

  6. Well, to be fair, I have no idea how the power consumption of the Exadata storage grid compares to equivalently sized storage devices from other vendors. Also, I don’t know how the power consumption of these chips is affected by idling compared to running at full tilt.

    I’m sure I could Google the ratings, but as I said, this wasn’t your argument in the videos, so I never really followed that line of thought.

    Perhaps a neat idea would be to do a power consumption comparison between Exadata and a system with equivalent RAC nodes and “modern storage”, just to spell it out for us hardware gumbies.

    Cheers

    Tim…

  7. @Tim – then you get on to the other article Kevin wrote about how many non-Exadata RAC licenses you need to match Exadata performance… and the only correct answer to that is, as Kevin said, “it depends”, because every scenario is different.

    So, if you can’t easily work out how much non-Exadata kit you need to compare to it, then you can’t work out how much power/cooling you need, and thus no comparison is possible.

  8. Jeff: I was thinking more of comparable RAC nodes and some good storage. I know this would not necessarily give the same performance for the same load, but it would be an easy comparison to make. 🙂

    Cheers

    Tim…

  9. @Tim – what does “equivalent” or “comparable” mean in this instance though?

    In any case, it’s still difficult to do the comparison at the storage level… what HCC compression ratio would you use for Exadata? Presumably, a larger ratio means you’d need more racks of normal storage to compare against it, meaning more power/AC requirements for that approach?

    That’s my point, I guess… the two things are apples and oranges in terms of their approach, and comparing them is then tricky… I’ve had first-hand experience of this over the past six months during a DW upgrade project comparing an Exadata X2-2 against an HP DL980 RAC setup with traditional modern storage (P2000).

  10. @Jeff I wasn’t talking about equivalent performance, just equivalent hardware on the RAC nodes. Same CPUs, memory, etc. So the only big difference between the systems is the storage.

    I agree the workload will have a big bearing on the storage. If your workload can use all the ExaMagic, then regular storage is going to have a tough time. I would have thought that for OLTP consolidation there would be less difference, since it seems like it doesn’t lend itself to the secret sauce so well. Oracle are pushing the “OLTP consolidation onto Exadata” angle pretty hard.

    I wasn’t suggesting Kevin give us a “comparable” spec machine (for all the reasons mentioned). It was just a ballpark thing so we could get a feel for the amount of power wasted by the idle CPUs.

    Cheers

    Tim…

  11. @Tim – I’m probably on a different wavelength with this conversation…sorry, not trying to be difficult! 😉

    I only think about DW usage… I spend most of my time in that type of environment. Seems to me that all the ExaMagic (have you trademarked that one? 😉 ), other than Flash Cache, is geared up for helping DW scenarios rather than OLTP… and there are numerous alternatives in the flash cache arena.

    Cheers
    Jeff

  12. @Jeff : You’re not being awkward. I’m just not explaining my point well. My problem, not yours. 🙂

    Yes. I think you are looking at it from a DW performance perspective, like, “How can I get similar performance to Exadata without buying Exadata?” That is not at all what I was talking about. I was just talking about a crappy example to show the difference in power consumption between the Exadata storage grid and a.n.other storage solution. Obviously that would have to include a gazillion caveats that it doesn’t guarantee the same performance.

    Yes. I agree that Exadata secret sauce seems very focused on DW and maybe some of the batch elements of OLTP/Hybrid systems, which is why I’m not entirely convinced it makes sense for OLTP consolidation. If you are outside the DW/BI sweet spot, I remain to be convinced about Exadata.

    Cheers

    Tim…

  13. @Tim – Heh heh… was gonna say the same thing, i.e. I’m not explaining myself well! No worries… I think I was purely focusing on those gazillion caveats… just coming to the end of our decision-making process at my current client and I’m a bit “keen” on challenging back on stuff.

    Think we’ll be going HP here, for many reasons, not all of which I can share… but the jury is still out. Either way, can’t wait to get my hands on the new kit… either of which will have at least 10x the CPU and 20x the I/O capability of the incumbent system – even my bad SQL will run quickly! 😉

  14. @Jeff When you read people saying things like, “We got X times performance improvement with Exadata”, they neglect to tell you what kit they were running on before. In many cases I’m sure a simple hardware refresh would have yielded pretty impressive results by itself.

    Cheers

    Tim…

  15. @Tim :

    “Yes. I agree that Exadata secret sauce seems very focused on DW ”

    The ***only*** OLTP feature in Exadata is Smart Flash Log and that is only there because Exadata cannot handle random writes. IORM can be considered relevant to OLTP in so much as you might be able to protect some service level for OLTP while running DW/BI on the same config.

    Oracle has done nobody any favors by suggesting Exadata is an OLTP specialized product just because it has a read cache (Exadata Smart Flash Cache).

  16. …so, as I figured, we ended up deciding to take the HP kit, not Exadata.

    We’re looking at three DL980 G7 servers RAC’d up, each of which has eight 8-core E7-2830 processors and 1TB of memory. Then we’re looking at about 35 HP P2000 MSA storage trays running RAID 10, with 24 × 2.5″ SAS drives each. Ten dual-port FC8 HBAs per DL980… then multiply that lot by two for DR 😉 (some rough aggregate numbers are sketched below)

    Red Hat Linux 6 OS + ASM for volume management.

    I’m just looking into SLOB and other benchmarks now…getting ready for when the big beast arrives, so I can give it a kicking!

    Happy days!

    😉
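
    For anyone trying to picture the scale of that kit, here is a rough nameplate-numbers sketch based on the configuration above. These are raw aggregates only; real-world throughput will be lower once RAID 10, the P2000 controllers and everything else have their say.

    ```python
    # Rough aggregates for the HP configuration described above. Nameplate maths
    # only -- real throughput will be lower (RAID 10, controller limits, etc.).

    servers          = 3     # DL980 G7 RAC nodes
    sockets_per_srv  = 8
    cores_per_socket = 8     # E7-2830
    ram_tb_per_srv   = 1

    trays           = 35     # HP P2000 MSA trays
    drives_per_tray = 24     # 2.5" SAS, RAID 10

    hbas_per_srv       = 10  # dual-port FC8 HBAs
    ports_per_hba      = 2
    usable_gb_per_port = 0.8 # ~800 MB/s usable on 8Gb FC (approximate)

    total_cores   = servers * sockets_per_srv * cores_per_socket       # 192
    total_ram_tb  = servers * ram_tb_per_srv                           # 3
    total_drives  = trays * drives_per_tray                            # 840
    fc_gb_per_srv = hbas_per_srv * ports_per_hba * usable_gb_per_port  # ~16 GB/s

    print(f"RAC grid: {total_cores} cores and {total_ram_tb}TB of RAM across {servers} nodes")
    print(f"Storage:  {total_drives} SAS drives across {trays} trays (RAID 10 halves usable capacity)")
    print(f"FC fabric: ~{fc_gb_per_srv:.0f} GB/s of raw HBA bandwidth per node")
    ```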

  17. @Jeff : Jealous! That’s going to run like shit off a shovel. 🙂

    I like SLOB. I still like Orion too. I know the numbers are not as *real*, but it’s nice to see what the storage can do in *relative isolation*. I also like using Swingbench and cranking it up. 🙂

    Cheers

    Tim…

  18. @Tim

    I’ll run both SLOB and ORION, just for completeness and comparison.

    Yes, a shovel developed at McLaren and polished to within an inch of its life 😉

    Arrrrjjjjjjyeahhh Baby!!

    Cheers
    Jeff

  19. It is all very interesting, but it’s a rather old way of thinking. Why can SAP with HANA put all the data in memory and sort out the issues of passing data and instructions directly at the processor level, rather than worrying about storage-level bottlenecks? This approach changes everything. Just think about a single server with 12TB of memory, all dedicated to data storage and processing, providing at least 150k IOPS for the price of just a server. In the end it is simply economics that will decide what the future architecture looks like.

  20. @Darek : you have to be careful not to oversimplify the argument. If you give Oracle 12TB of memory, it’s going to run pretty fast too.

    There are a number of factors that go into your choice. For many apps, you can’t simply replace Oracle with another engine.

    I’m not saying this won’t change eventually, but we can’t go ripping out Oracle from our ERPs and expect them to work properly on MongoDB overnight.

    Change is gradual.

    Cheers

    Tim…
