
Ultimate NAS How-To

Step 4: Testing Your NAS

NFS Test:

I exported a directory from the nas-cluster, then mounted it on another Linux server with the following command:
mount -o rw,bg,proto=tcp,hard,intr,rsize=524288,wsize=524288 172.24.100.15:/srv/samba/shares/data/nfs/exports/ /mnt
I then executed a fio script against that directory for 360 seconds. After the script ran for a few minutes, I halted the node running the IP resource. Here is what fio reported:

Jobs: 8 (f=8): [rrrrrrrr] [26.9% done] [5677K/0K /s] [1386 /0  iops] [eta 04m:24
Jobs: 8 (f=8): [rrrrrrrr] [27.1% done] [5632K/0K /s] [1375 /0  iops] [eta 04m:23
Jobs: 8 (f=8): [rrrrrrrr] [27.4% done] [5681K/0K /s] [1387 /0  iops] [eta 04m:22
Jobs: 8 (f=8): [rrrrrrrr] [27.7% done] [5607K/0K /s] [1369 /0  iops] [eta 04m:21
Jobs: 8 (f=8): [rrrrrrrr] [28.0% done] [5943K/0K /s] [1451 /0  iops] [eta 04m:20
Jobs: 8 (f=8): [rrrrrrrr] [28.3% done] [5459K/0K /s] [1333 /0  iops] [eta 04m:19
Jobs: 8 (f=8): [rrrrrrrr] [28.5% done] [5857K/0K /s] [1430 /0  iops] [eta 04m:18
Jobs: 8 (f=8): [rrrrrrrr] [28.8% done] [5263K/0K /s] [1285 /0  iops] [eta 04m:17
Jobs: 8 (f=8): [rrrrrrrr] [29.1% done] [0K/0K /s] [0 /0  iops] [eta 04m:16s]    
Jobs: 8 (f=8): [rrrrrrrr] [38.5% done] [2351K/0K /s] [574 /0  iops] [eta 03m:42s
Jobs: 8 (f=8): [rrrrrrrr] [38.8% done] [4718K/0K /s] [1152 /0  iops] [eta 03m:41
Jobs: 8 (f=8): [rrrrrrrr] [39.1% done] [4882K/0K /s] [1192 /0  iops] [eta 03m:4

Down for about 45 seconds. Not bad, though it could be improved. What about CIFS and FTP, you say? I'll try to get those tested and put up some data at a later date; that is, if you don't beat me to it.
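The fio script itself isn't shown above, so here is a sketch of a job that matches the shape of the output: the [rrrrrrrr] status column is fio's indicator for eight random readers, and ~5600 KB/s at ~1380 iops works out to roughly 4k per request. The job name, file size, and target path are assumptions:

fio --name=nfs-failover-test --directory=/mnt --rw=randread --bs=4k \
    --size=1g --numjobs=8 --runtime=360 --time_based --group_reporting

Running something like this while failing the cluster's IP resource over should reproduce the stall-and-recover pattern in the log above.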

Comments

  • Excellent.

    But I would like to see a Samba + CTDB-only how-to from you.

    Possible? 🙂

    • I could, but Samba already has a pretty good explanation of how to do it at ctdb.samba.org. Not to mention, there are many reasons why you would not want to run CTDB, Samba, and a cluster filesystem without a full-blown cluster stack.

  • Hi,

    When I try to apply the CTDB patch, I get the following:

    [root@cluster1 heartbeat]# cat ~/ctdb.patch | patch
    patching file CTDB
    Hunk #1 succeeded at 78 with fuzz 2 (offset -3 lines).
    patch: **** malformed patch at line 34: @@ -371,6 +391,11 @@

    Any suggestions?

    I am using the latest resource agents from Git, as I am using GlusterFS instead of fighting with DRBD/OCFS2.

    I am also running directly on Oracle Linux rather than CentOS with the kernel patched in.

    Most of your guide has worked so far, with a bit of teeth-gnashing between parts 🙂

    Cheers,

    Kane.

    • Hey, thanks for the comment, and sorry for any trouble; I tried to test as much as possible, lol.
      Perhaps it's the formatting of the patch? Try this db link and let me know if it works or doesn't work for you; if it still fails, the dry-run check sketched below can show where it breaks.
      If you have time to elaborate, I'd love to hear about any other frustrations or problems you experienced.

      Thanks
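
      A minimal way to do that check, assuming the patch targets the CTDB resource agent in the usual OCF heartbeat directory (the path and -p level here are assumptions, not from the original session):

      # Preview whether the patch applies cleanly, without modifying anything
      cd /usr/lib/ocf/resource.d/heartbeat
      patch --dry-run -p0 < ~/ctdb.patch

      # If the dry run succeeds, apply it for real
      patch -p0 < ~/ctdb.patch

      Patches pasted through a blog or mail client often lose the leading space on context lines, which is a common cause of exactly this kind of "malformed patch" error.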

  • That worked, thanks.

    Most of my problems were getting ocfs2_controld.pcmk to come up; it would install each time, but Pacemaker could never start it. dlm_controld.pcmk was running, but there was no /dlm for OCFS2 to attach onto.

    Otherwise it was silly things, like the DRBD tools (8.13) and kernel module (8.11) being different versions in Oracle Linux, so when you yum update you then have to downgrade the tools or exclude them from the update (sketched below).

    I have to document the build I am doing for work, so I will drop you a copy of it. GlusterFS, once running, seems to have a lot less to go wrong, but of course only time and testing will tell.

    Cheers

    Kane.
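
    On the exclude approach mentioned above, a quick sketch with yum (the drbd* package glob is an assumption; match it to whatever your repo actually ships):

    # One-off: skip the DRBD packages for this update only
    yum update --exclude='drbd*'

    # Persistent: pin them by adding an exclude line to the [main] section of /etc/yum.conf
    # exclude=drbd*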
