Set up an Oracle Cluster File System
Revision as of 09:56, 30 April 2010
You want to get started with clustering using an Oracle Cluster File System (OCFS2). Here is how. This is an example using SUSE Linux 11.2, with iSCSI as shared storage.
Installation
- We assume here the two nodes are named node1 and node2 and have the IP addresses 192.168.0.11 and 192.168.0.12.
On both nodes, configure your iSCSI initiator; install everything that YaST proposes:
yast2 iscsi-client
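Under the hood, yast2 iscsi-client configures the open-iscsi initiator; the same can be done manually with iscsiadm. A minimal sketch — the portal address 192.168.0.1 is an assumption, use your iSCSI target's address:

```shell
# Guard so the sketch degrades gracefully where open-iscsi is absent
if command -v iscsiadm >/dev/null 2>&1; then
  # Ask the portal which targets it exports (portal IP is an assumption)
  iscsiadm -m discovery -t sendtargets -p 192.168.0.1
  # Log in to all discovered targets
  iscsiadm -m node --login
else
  echo "iscsiadm not installed - install open-iscsi first"
fi
iscsi_sketch_done=yes
```

After a successful login, the new disk shows up in the output of dmesg on both nodes.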
On both nodes, install the OCFS2 software
yast -i ocfs2-tools ocfs2console ocfs2-tools-o2cb
On both nodes, make the cluster services start at boot
/etc/init.d/o2cb enable
You get a message "cluster not known". That is okay.
- Start ocfs2console
- Choose Cluster->Configure Nodes...
- Enter the cluster nodes with their local host names (what the command "hostname" returns).
- Choose Cluster->Propagate Configuration...
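Once the configuration has been propagated, the shared device can be formatted (on one node only) and then mounted on every node. A minimal sketch — the device name /dev/sdb and the label ocfs2data are assumptions; check dmesg to see which device your iSCSI disk got:

```shell
# Only act if the tools and the assumed device are actually present
if command -v mkfs.ocfs2 >/dev/null 2>&1 && [ -b /dev/sdb ]; then
  # Create the file system with 2 node slots - run on ONE node only
  mkfs.ocfs2 -N 2 -L ocfs2data /dev/sdb
  # Then, on EVERY node:
  mount /dev/sdb /mnt
else
  echo "skipping: mkfs.ocfs2 or /dev/sdb not present"
fi
mkfs_sketch_done=yes
```

The -N option sets the number of node slots, i.e. how many nodes may mount the volume at the same time.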
Troubleshooting
Unable to access cluster service while trying to initialize cluster
Symptom
When trying to mount the cluster volume you get the error message
node1:~ # mount /dev/sdb /mnt
mount.ocfs2: Unable to access cluster service while trying initialize cluster
Reason
In one case the reason was that the cluster service had not been started on all nodes.
Solution
Start the cluster service on all nodes:
node2:~ # /etc/init.d/o2cb status
Driver for "configfs": Not loaded
Driver for "ocfs2_dlmfs": Not loaded
node2:~ # /etc/init.d/o2cb enable
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading stack plugin "o2cb": OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Cluster not known
node2:~ # /etc/init.d/o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
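The status output shown above can also be checked mechanically before mounting. A small sketch — the helper name o2cb_ok is made up, and the sample texts are shortened from the output above:

```shell
# Succeeds only if the given o2cb status text contains no "Not loaded" line
o2cb_ok() {
  ! printf '%s\n' "$1" | grep -q 'Not loaded'
}

before='Driver for "configfs": Not loaded
Driver for "ocfs2_dlmfs": Not loaded'
after='Driver for "configfs": Loaded
Driver for "ocfs2_dlmfs": Loaded'

o2cb_ok "$before" || echo "stack down - run /etc/init.d/o2cb enable"
o2cb_ok "$after" && echo "stack up - safe to mount"
```

In real use you would feed it the live output: o2cb_ok "$(/etc/init.d/o2cb status)".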
Unable to access cluster service while creating node
Symptom
In ocfs2console when adding nodes you get the error message
o2cb_ctl: Unable to access cluster service while creating node
Could not add node node1
Solution 1
The following solution worked once: Delete /etc/ocfs2/cluster.conf
rm /etc/ocfs2/cluster.conf
Solution 2
The following solution worked once: Write /etc/ocfs2/cluster.conf manually:
node:
	name = node1
	cluster = ocfs2
	number = 0
	ip_address = 192.168.0.11
	ip_port = 7777
node:
	name = node2
	cluster = ocfs2
	number = 1
	ip_address = 192.168.0.12
	ip_port = 7777
cluster:
	name = ocfs2
	node_count = 2
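Instead of typing the file by hand on every node, it can be generated by a script. A sketch that writes to ./cluster.conf so it is harmless to try; in real use, copy the result to /etc/ocfs2/cluster.conf on every node. Note that o2cb expects the attribute lines to be indented with a tab, which is why printf with \t is used here:

```shell
# Generate a two-node cluster.conf; node names and addresses are the
# example values from this article - adjust them to your cluster.
{
  printf 'node:\n'
  printf '\tname = node1\n\tcluster = ocfs2\n\tnumber = 0\n'
  printf '\tip_address = 192.168.0.11\n\tip_port = 7777\n'
  printf 'node:\n'
  printf '\tname = node2\n\tcluster = ocfs2\n\tnumber = 1\n'
  printf '\tip_address = 192.168.0.12\n\tip_port = 7777\n'
  printf 'cluster:\n'
  printf '\tname = ocfs2\n\tnode_count = 2\n'
} > ./cluster.conf
```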
You see files only on one node
Symptom
You have your filesystem mounted and add a file on one node, but do not see it on the other node.
Reason 1
In one case the reason for this was that the user had forgotten to "Propagate Configuration" and, in addition, node2 could not be reached over the network.
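A quick way to rule the network in or out as the culprit is to check that every node answers. A sketch — the node names are the example values from this article:

```shell
# Report reachability of each given cluster node, one line per node
check_nodes() {
  for n in "$@"; do
    if ping -c 1 -W 1 "$n" >/dev/null 2>&1; then
      echo "$n: reachable"
    else
      echo "$n: NOT reachable - check the network and /etc/hosts"
    fi
  done
}
check_nodes node1 node2
```

If a node is unreachable, fix the network first, then redo Cluster->Propagate Configuration... in ocfs2console.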