With the recent news that the latest version of VirtualBox now supports shared disks, I thought I better give it a go and see if I could do a RAC installation on it. The good news is it worked as expected. You can see a quick run through here:
This is pretty good news as that was the last feature that tied me to VMware Server. I’ve now moved pretty much everything I do at home on to VirtualBox and it’s working fine.
It’s worth taking a little time looking at the VBoxManage command line. Some of the operations, like creating the shared disks, have to be done from the command line at the moment. It’s also handy for running VMs in headless mode if you don’t want the GUI screen visible all the time.
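To give a flavour of what that looks like, creating a shareable disk and starting a VM headless goes something like this (the VM names, disk name and controller name are just placeholders, so adjust to suit your setup):

```shell
# Create a fixed-size disk image. Shareable disks must be fixed-size,
# not dynamically allocated.
VBoxManage createhd --filename asm1.vdi --size 10240 --format VDI --variant Fixed

# Mark the disk as shareable so it can be attached to more than one VM.
VBoxManage modifyhd asm1.vdi --type shareable

# Attach the disk to both RAC nodes ("rac1" and "rac2" are example VM names).
VBoxManage storageattach rac1 --storagectl "SATA Controller" \
  --port 1 --device 0 --type hdd --medium asm1.vdi
VBoxManage storageattach rac2 --storagectl "SATA Controller" \
  --port 1 --device 0 --type hdd --medium asm1.vdi

# Start a VM without the GUI.
VBoxManage startvm rac1 --type headless
```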
I finally got round to trying an NFS installation of 11gR2 RAC.
No real surprises here. It all seems a little simpler when using NFS, but it was chronically slow on my crappy kit.
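If you want to try something similar, the main fiddly bit is getting the NFS mount options right for Oracle files. An "/etc/fstab" entry along these lines is the sort of thing you need (the server name and paths are just examples):

```
nas1:/shared  /u01/shared  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
```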
A couple of new articles have crept out recently. The first is me pretending to understand DNS.
I used this configuration in place of the “/etc/hosts” in my VMware RAC installation and it worked great.
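As a rough illustration, the forward zone file just ends up holding the entries you would otherwise put in "/etc/hosts", including the virtual IPs (the names and addresses below are made up):

```
; Example forward zone entries for a RAC cluster.
rac1        IN  A   192.168.2.101
rac2        IN  A   192.168.2.102
rac1-vip    IN  A   192.168.2.111
rac2-vip    IN  A   192.168.2.112
```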
The second is a brief romp through edition-based redefinition.
This article started to get really big and feel like a rewrite of the manual, so I stripped most of it out and really just left a couple of examples of how it can be used. I figure this is enough to give you a feel for what it can do, but isn’t as daunting as working through the manuals if all you want is a quick taste.
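To give you a quick taste of the sort of thing covered, the basic moves are creating an edition, switching your session to it and redefining an editionable object there, along these lines (the user, edition and procedure names are just examples):

```sql
-- Allow the user to own editioned objects, then create a child edition.
ALTER USER test ENABLE EDITIONS;
CREATE EDITION v2 AS CHILD OF ora$base;
GRANT USE ON EDITION v2 TO test;

-- Switch the session to the new edition and redefine a procedure there.
-- Sessions still using ORA$BASE continue to see the old version.
ALTER SESSION SET EDITION = v2;

CREATE OR REPLACE PROCEDURE greet AS
BEGIN
  DBMS_OUTPUT.put_line('Hello from edition V2');
END;
/
```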
I’ve seen edition-based redefinition described as a killer feature, but I’m not so sure myself. Don’t get me wrong, I think it is really cool, but “really cool” doesn’t always become “frequently used”. As I was playing with it I had flashbacks to Workspace Management introduced in 9i. I’ve spoken to a lot of people over the years and very few even remember it exists, let alone use it.
There is nothing conceptually difficult about edition-based redefinition, but there are potentially a lot of working parts involved and therefore a lot of scope for human error and/or confusion. I’m sure some people have been praying for something like this for a long time, and others will remain blissfully ignorant of it forever. It would be interesting to gaze into a crystal ball and see how much this stuff is used in a few years’ time (and get some lottery results).
I’ve taken my first tentative steps into 11gR2 RAC and it was a big surprise.
11gR2 RAC feels very different to 11gR1 RAC. I can imagine quite a few people wanting to upgrade from 11gR1 thinking it will be trivial and getting a rude awakening…
The Grid Infrastructure (Clusterware + ASM) seems more complicated. There are more installation options, more prerequisites, more background processes and a bigger memory requirement…
I typically install 11gR1 RAC on VMware using 1G of RAM per VM. If you try that with 11gR2 you will get to the end of the Grid Infrastructure installation and have nothing left. The minimum recommendation for Grid Infrastructure alone is 1.5G, but if you want the RAC DB as well you are talking 2.5G. It actually worked fine with 2G of RAM allocated to each VM, but this is a whopping increase compared to 11gR1.
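If you're using VMware, bumping the memory allocation just means editing the VM's ".vmx" file (or using the GUI) before starting the installation, e.g.:

```
memsize = "2048"
```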
At this point I feel like I know nothing about 11gR2 RAC, but it certainly doesn’t feel like a patched version of 11gR1. If this had been released as 12g I would still have been surprised by the level of change.
So over the next few days I’m expecting the dust to settle, my residual fear of all things new to subside and I’ll probably change my opinion completely and think it’s all the same as it was before…
PS. Please don’t try this installation on your 32-bit Windows laptop with 2G of RAM then write to me complaining it doesn’t work and telling me the article is rubbish…
I mentioned in a previous post I had taken the plunge and upgraded to VMware Server 2 on my laptop. Now I’ve also upgraded my main machine at home and it seems to be working fine. Probably the most complicated thing I run at home is a virtual RAC, so I wrote a new article to document the installation:
From a user point of view, the only difference between VMware Server 1.x and 2 is the new web-based management interface. The VM setup itself is almost identical and, as you would expect, so is the Oracle installation.
So far so good.
It looks like those possible VMware ESX articles I mentioned yesterday are now on the VIOPS site.
If you’re interested in the enterprise VMware kit it’s worth taking a look at the site. New stuff is being added all the time. I think its official launch is at VMworld 2008 in about 3 weeks.
I’ve also added an overview article for the ESX Server installation to my website.
I guess you would have to be in a coma to not notice that Oracle 11g is now out for Windows 32-bit.
To celebrate this release I’ve done an 11g RAC on Windows 2003 article, which is an update of my 10g RAC on Windows 2003 article. With both installations, if you get the networking stuff sorted, the installs are a breeze. Miss any steps out and you’re in for a world of hurt.
I think I’m finally getting myself back on track. It’s been an unusual few weeks though.
I spent quite some time complaining that I couldn’t think of anything to write about and hoping 11g would inspire me. Since the release of 11g and the inevitable installation articles, I’ve felt rather lethargic again. Getting into a new version of the database is always a bit odd. For me it’s a combination of excitement and denial…
Well, I’ve finally updated the VMware RAC article for 11g, which was dependent on a RAM upgrade. It works fine, but it’s very slow. Unless you want the ASM experience, I think the NFS RAC method is a lot cleaner and easier.
I’ve also started to plug through the DB new features. The first thing I played around with was Partitioning. I’m hoping I can keep up the momentum for a while. I wanted to sit the 11g beta OCP exam, but I know so little about 11g at the moment it seems really unlikely I’ll get to grips with it before the beta exam closes. It’s a shame really because it’s nice to be involved in the process.
I finished reading Vittorio the Vampire. Of all the Anne Rice books I’ve read I think it’s the weakest. It’s all a bit flowery and “Mills & Boon”. Not my cup of tea.
On a more serious note, my 5 year old nephew was in A&E last night with pneumonia. I was with him all day yesterday and although he wasn’t well, we didn’t suspect something so serious. A bit of Calpol and he was up attacking a balloon octopus with a plastic sword… Things got worse through the night which resulted in the A&E visit and the diagnosis. The doctor was surprised he was so active and chirpy considering. Tough as old boots! He’s back at home now and all looks good, but it’s very unnerving. I suspect within a couple of days he will be back in full effect.
As a follow-on from my 10g RAC on NFS article, I thought it would be nice to have an 11g RAC on NFS article. The process is very similar, with a couple of exceptions:
- The Virtual IP Configuration Assistant (vipca) runs in silent mode without any problems now. Under 10g, you had to use a “real” public IP address for this to work. Under 11g it now works with private IPs like “192.168.x.x” etc.
- Oracle 11g includes a Direct NFS Client for “optimized” Oracle over NFS performance. I don’t have the relevant kit to do a performance comparison, so I don’t know if it’s worth it or not. If someone has some figures for this I would be interested to hear them.
Update: For information on Direct NFS Client performance look here.
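For reference, part of configuring the Direct NFS Client is an optional "oranfstab" file in "$ORACLE_HOME/dbs" describing the NFS servers and mounts, something like this (the server name and paths below are just examples):

```
# $ORACLE_HOME/dbs/oranfstab
server: nas1
path: 192.168.2.200
export: /shared/oradata  mount: /u01/oradata
```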
I’ve mentioned it before, but I really like Kevin Closson’s blog. For some time he’s been evangelizing about Oracle RAC over NFS, so I thought I would give it a go to see what it’s all about and here is the result.
Oracle 10g RAC On Linux Using NFS
I was only using two machines, and I didn’t have access to a NAS that supported NFS, so I was forced to use one of the RAC nodes as my NFS server. I know it’s a dumb idea, but it proves the technology.
If you are just playing about, the nice thing about this solution is you don’t need to worry about “real” shared storage. I prefer it to the VMware approach because you don’t need a single server with loads of memory to fake two virtual machines and the shared storage. Finding two poor machines is always easier than one good one.
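For anyone wanting to copy the setup, the server side is just a normal NFS export of the shared directory, for example in "/etc/exports" (the path is an example, and the options are the kind typically suggested for Oracle files):

```
/shared  *(rw,sync,no_wdelay,insecure,no_root_squash)
```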