Clustering, High Availability, How-Tos, Linux | August 5, 2012 9:02 pm

Ultimate NAS How-To

Step 5: Making Your Cluster “Production-Ready”

So you have a NAS cluster running NFS, SMB, and FTP, but it's not production-ready yet. Here are some items that I'll leave you to explore on your own:

  • Adding additional nodes
  • Adding additional IP resources
  • Adding a proper stonith configuration and testing
  • Implementing monitoring and notifications
  • Configuring additional cluster properties
  • Configuring filesystem permissions and access controls
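As a starting point for the monitoring item above, here is a minimal sketch using the `ocf:pacemaker:ClusterMon` resource agent, which wraps `crm_mon` to watch cluster events. The mail address and intervals are placeholder assumptions, not values from this guide; adapt them to your environment.

```shell
# Hedged sketch: basic cluster monitoring/notifications via ClusterMon.
# The --mail-to option and the address are assumptions; check your
# Pacemaker version's crm_mon for supported notification options.
crm configure primitive cluster-mon ocf:pacemaker:ClusterMon \
    params user="hacluster" update="30" \
           extra_options="--mail-to admin@example.com" \
    op monitor interval=60s

# Clone it so monitoring runs on every node
crm configure clone cl-cluster-mon cluster-mon
```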

My primary suggestions are that you have at least 3 nodes and that you implement multiple IPaddr2 resources. You can then weight them (read up here) and add additional configuration (namely placement-strategy and resource-stickiness) to ensure that your IP resources are balanced across the available nodes.
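To make that concrete, here is a hedged sketch of what multiple balanced IPaddr2 resources might look like in the crm shell. The IP addresses, netmask, and resource names are illustrative assumptions, not values from this guide:

```shell
# Hedged sketch: one IPaddr2 resource per service, balanced across nodes.
# All IPs/names below are placeholders for illustration.
crm configure primitive ip_nfs ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.101 cidr_netmask=24 op monitor interval=30s
crm configure primitive ip_smb ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.102 cidr_netmask=24 op monitor interval=30s
crm configure primitive ip_ftp ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.103 cidr_netmask=24 op monitor interval=30s

# Spread resources across nodes by utilization, and keep them where
# they are once placed (avoids needless failback churn).
crm configure property placement-strategy=balanced
crm configure rsc_defaults resource-stickiness=100
```

The resource-stickiness value is a trade-off: higher values mean resources stay put after a failed node recovers, at the cost of a temporarily uneven distribution.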

Also, you MUST make sure you properly implement a good stonith. Since my cluster runs entirely on VMs, I can use fence_xenapi, but whatever your platform, fencing/stonith is critical to your system's stability. With 3 nodes, a properly configured stonith device, and "stonith-enabled" set to true, you can ensure your cluster will operate with stability and integrity. If you aren't that familiar with stonith and fencing, see here for more details and information.
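As a rough sketch of that setup, the following shows a fence_xenapi stonith resource being configured and fencing switched on. The XenServer URL and credentials are placeholders; consult your fence agent's metadata (`stonith_admin --metadata -a fence_xenapi`) for the exact parameter names your version expects:

```shell
# Hedged sketch: stonith via fence_xenapi for a cluster of Xen VMs.
# URL and credentials below are placeholder assumptions.
crm configure primitive st-xenapi stonith:fence_xenapi \
    params session_url="https://xenserver.example.com" \
           login="root" passwd="secret" \
    op monitor interval=60s

# Only enable stonith once the fence device itself is confirmed working
crm configure property stonith-enabled=true
```

After enabling it, actually test it: fence a non-critical node (for example with `stonith_admin --reboot <node>`) and confirm the cluster recovers its resources as expected.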

OK, I hope this how-to was helpful. As a matter of fact, if it was, please leave a comment :).

Thanks,

-Guru


7 Comments

  • Excellent.

    But I would like to see a samba ctdb only from you.

    Possible ? 🙂

  • I could, but Samba already has a pretty good explanation of how to do it at ctdb.samba.org. Not to mention, there are many reasons why you would not want to run CTDB, Samba, and a cluster filesystem without a full-blown cluster stack.

  • Hi,

    When I try to apply the CTDB patch, I get the following:

    [root@cluster1 heartbeat]# cat ~/ctdb.patch | patch
    patching file CTDB
    Hunk #1 succeeded at 78 with fuzz 2 (offset -3 lines).
    patch: **** malformed patch at line 34: @@ -371,6 +391,11 @@

    Any suggestions ?

    I am using the latest resource agents from GIT as I am using GlusterFS instead of fighting with DRBD / OCFS2.

    I am also running directly on Oracle Linux rather than Centos with the kernel patched in.

    Your guide has worked for the majority of it so far with a few teeth gnashes between parts 🙂

    Cheers,

    Kane.

    • Hey, thanks for the comment and sorry for any troubles. I tried to test as much as possible lol.
      Perhaps it's the formatting of the patch? Try this db link . Let me know if it works/doesn't work for you.
      If you have time to elaborate, I'd love to hear about any other frustrations or problems you experienced.

      Thanks

  • That worked, thanks.

    Most of my problems were getting ocfs2_controld.pcmk to come up; it would install each time, but Pacemaker could never start it. dlm_controld.pcmk was running, but there was no /dlm for OCFS2 to attach onto.

    Otherwise it was silly things, like the DRBD tools (8.13) and kernel module (8.11) being different versions in Oracle Linux, so when you yum update you then have to downgrade the tools or exclude them from the update.

    I have to document the build I am doing for work, so I will drop you a copy of it. GlusterFS, once running, seems to have a lot less to go wrong, but of course only time and testing will tell.

    Cheers

    Kane.

