Set up an Oracle Cluster File System
Latest revision as of 07:20, 15 December 2014
You want to get started with clustering using an Oracle Cluster File System (OCFS2). Here is how. This example uses SUSE Linux 11.2 with iSCSI as the shared storage.
Installation
- We assume two nodes, named node1 and node2, with the IP addresses 192.168.0.11 and 192.168.0.12.
- On both nodes, configure your iSCSI initiator and install everything that YaST proposes:
yast2 iscsi-client
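If you prefer to skip the YaST dialog, the same can be done on the command line with open-iscsi's iscsiadm. A sketch; the portal address 192.168.0.1 and the target IQN are placeholders for your storage:

```shell
# Discover the targets offered by the iSCSI portal (placeholder address)
iscsiadm -m discovery -t sendtargets -p 192.168.0.1
# Log in to the discovered target (placeholder IQN)
iscsiadm -m node -T iqn.2014-12.example:storage.ocfs2 -p 192.168.0.1 --login
# Make the session come back after a reboot
iscsiadm -m node -T iqn.2014-12.example:storage.ocfs2 -p 192.168.0.1 \
  --op update -n node.startup -v automatic
```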
- On both nodes, install the OCFS2 software:
yast -i ocfs2-tools ocfsconsole ocfs2-tools-o2cb
- On both nodes, make the cluster services start at boot:
chkconfig o2cb on
/etc/init.d/o2cb enable
You get a message "cluster not known". That is okay.
- Check that the nodes can reach each other:
node1:~ # ping node2
PING node2 (192.168.0.12) 56(84) bytes of data.
64 bytes from node2 (192.168.0.12): icmp_seq=1 ttl=64 time=1.09 ms

node2:~ # ping node1
PING node1 (192.168.0.11) 56(84) bytes of data.
64 bytes from node1 (192.168.0.11): icmp_seq=1 ttl=64 time=1.09 ms
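The pings above rely on name resolution. If you have no DNS entries for the nodes, a minimal /etc/hosts on both machines (with the addresses assumed above) does the job:

```
192.168.0.11   node1
192.168.0.12   node2
```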
- Start ocfs2console
- Choose Cluster->Configure Nodes...
- Enter the cluster nodes with their local host names (what the command "hostname" returns).
- Choose Cluster->Propagate Configuration...
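With the configuration propagated, the shared disk can be formatted once and then mounted on every node. A sketch, assuming the iSCSI disk shows up as /dev/sdb and a two-node cluster; the label ocfs2vol is a placeholder:

```shell
# Format once, from one node only: 2 node slots, filesystem label "ocfs2vol"
mkfs.ocfs2 -N 2 -L ocfs2vol /dev/sdb
# Then, on both nodes, mount the shared volume
mount -t ocfs2 /dev/sdb /mnt
```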
Troubleshooting
Unable to access cluster service while trying to initialize cluster
Symptom
When trying to mount the cluster volume you get the error message
node1:~ # mount /dev/sdb /mnt
mount.ocfs2: Unable to access cluster service while trying initialize cluster
Reason
In one case, the reason was that the cluster service had not been started on all nodes.
Solution
Start the cluster service on all nodes:
node2:~ # /etc/init.d/o2cb status
Driver for "configfs": Not loaded
Driver for "ocfs2_dlmfs": Not loaded
node2:~ # /etc/init.d/o2cb enable
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading stack plugin "o2cb": OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Cluster not known
node2:~ # /etc/init.d/o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Unable to access cluster service while creating node
Symptom
In ocfs2console, when adding nodes, you get the error message
o2cb_ctl: Unable to access cluster service while creating node
Could not add node node1
Solution 1
The following solution worked once: Delete /etc/ocfs2/cluster.conf
rm /etc/ocfs2/cluster.conf
Solution 2
The following solution worked once: Write /etc/ocfs2/cluster.conf manually:
node:
	name = node1
	cluster = ocfs2
	number = 0
	ip_address = 192.168.0.11
	ip_port = 7777

node:
	name = node2
	cluster = ocfs2
	number = 1
	ip_address = 192.168.0.12
	ip_port = 7777

cluster:
	name = ocfs2
	node_count = 2
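A typo in a hand-written cluster.conf is easy to make. The following shell function is a small sanity check (a sketch; check_cluster_conf is not part of the ocfs2 tools) that the declared node_count matches the number of node: stanzas in the file:

```shell
# Verify that node_count in an OCFS2 cluster.conf matches the
# number of "node:" stanzas in the file.
check_cluster_conf() {
    conf=${1:-/etc/ocfs2/cluster.conf}
    # stanza headers start in column 0; attributes are tab-indented
    nodes=$(grep -c '^node:' "$conf")
    declared=$(awk '$1 == "node_count" {print $3}' "$conf")
    if [ "$nodes" -eq "$declared" ]; then
        echo "OK: $nodes nodes"
    else
        echo "mismatch: $nodes node stanzas, node_count = $declared"
    fi
}
```

For the two-node file above, check_cluster_conf /etc/ocfs2/cluster.conf should report two nodes.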
You see files only on one node
Symptom
You have your filesystem mounted and add a file on one node, but do not see it on the other node.
Reason 1
In one case, the reason was that the user had forgotten to "Propagate Configuration" and node2 could not be reached over the network.
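Both conditions can be checked from node1. A sketch (assumes ssh access between the nodes, which this article has not set up): after propagating, the checksum of cluster.conf must be identical on both nodes.

```shell
# Is node2 reachable at all?
ping -c 1 node2
# Compare the cluster configuration on both nodes; the two
# checksums must match after "Propagate Configuration..."
md5sum /etc/ocfs2/cluster.conf
ssh node2 md5sum /etc/ocfs2/cluster.conf
```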