Comments for OCFS2 On Linux
thahu said...Good document, worked perfectly. Thank you, Tim.
Do you need partitions on all nodes?
I have partitions as /dev/sda5 on all nodes.
The o2cb is running and online on all nodes.
Mounting on all nodes brings cluster heartbeat "active"
Touching a file on /u01 did not produce a result on the other nodes.
It appears as if the devices are local to each server and not in a 'cluster'.
Are the local sda5 partitions conflicting with the cluster?
Is this filesystem supposed to replicate data to local disks, or is this 'simply' a better NFS?
Tim... said...The partitions MUST be on shared disks. There is no copying between disks. It's one set of disks shared between all nodes.
So you don't have to partition disks on each node. It's one set of disks. As such, something done by one node should, by definition, be done on all nodes, since the disks are shared. :)
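To illustrate the point above, here is a rough sketch of how you can confirm that the nodes really are seeing one set of shared disks. The device path (/dev/sda5), mount point (/u01) and hostnames (node1, node2) are just examples; substitute your own.

```shell
# Assumes /dev/sda5 is a shared (SAN/iSCSI) device formatted with OCFS2
# and mounted at /u01 on every node in the cluster.

# On node1: create a test file on the shared mount.
touch /u01/shared-test.txt

# On node2: the same file should be visible straight away, because
# both nodes are mounting the same physical disk. No copying happens.
ls -l /u01/shared-test.txt
```

If the file does not appear on the second node, the "shared" device is almost certainly a local disk that just happens to have the same device name on each server.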
ashoklvr said...Thanks a ton, Tim... very, very useful article.
Suresh. said...Your document is great. I tried using similar documentation available on other sites. I spent 2 days on them and could not find a proper answer. Then I came across yours. It's great. It is as easy as eating a piece of cake... Thanks for your simplified work.
I'm not sure if I missed something, but to get this to work, in ocfs2console I also had to go to Cluster -> Propagate Configuration on node 1 (answering a few questions re: ssh and the root password), then on the other nodes I had to load, and then online, ocfs2.
Since then it's been great... thanks for summarizing all the partial documentation out there for us so succinctly.
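The manual steps the comment above describes can be sketched as the following commands, assuming the o2cb init script shipped with the OCFS2 tools. The script path and the cluster name ("ocfs2") may differ on your distribution and version.

```shell
# On each remaining node, after the configuration has been propagated
# from node 1:
/etc/init.d/o2cb load            # load the OCFS2 kernel modules
/etc/init.d/o2cb online ocfs2    # bring the named cluster online
/etc/init.d/o2cb status          # confirm the cluster is online
```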
Alan said...Hi Tim,
Is the install carried out as the 'root' user or the 'oracle' user?
RPM installs are always performed as root. The "#" command prompt also suggests this.
Alan said...OK, I didn't make myself clear. I meant the OCFS2 console. Is that root or oracle?
Once again, look at the prompt. It's "#", so that is the root user. Users other than root have a "$" prompt.
ora-ocfs said...Hi Tim, I'm trying to understand what it is that shares the data across disks; is it ocfs2 that shares the data or is ocfs2 installed into nodes that already have shared "clustered" disks in the 1st place?
If the latter, what did you use to cluster the disks? In your comment "The partitions MUST be on shared disks ..." what was used to share the disks?
Thanks for your very clear articles.
No data is shared between disks. Each disk itself is shared. Shared in this context means physically connected to both machines.
There isn't enough room to discuss it here. If you have more questions, ask on the forum.
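For context, the nodes know about each other through the cluster configuration file, not through any data replication. A minimal, illustrative /etc/ocfs2/cluster.conf for two nodes might look like the following; the hostnames and IP addresses here are made up, and the indentation is significant in this file format.

```
node:
        ip_port = 7777
        ip_address = 192.168.0.101
        number = 0
        name = node1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.102
        number = 1
        name = node2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
```

This file must be identical on all nodes, which is what the console's Propagate Configuration option takes care of.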
Will said...Hi Tim,
Great tutorial. I have just one question. Are there any risks involved using this setup in a production environment?
OCFS2 is a production ready and supported product, so the risks are similar to any other product you install in a production environment.
I would avoid it if I were using RAC, as having two cluster solutions on one box is a killer.
DO NOT ask technical questions here! They will be deleted!
These comments should relate to the contents of a specific article. Constructive criticism is good. Advertising and offensive comments are bad and will be deleted!