There are two “things” that I often have to give some credit to for helping me out over the years. One is Google, where I’ll unashamedly say that 1/4 of my brain is stored. The other is the Open Source software community.
The OSS community deserves a shout out because one of the MANY challenges small [IT] organizations face is cost-efficient systems modernization. Let's face it; technology is the great equalizer. When a small organization can get its hands on the same class of software employed by the big fish in its sector, it can compete at a much higher level in the marketplace. There's also the fact that some technologies and features have become key to running a stable, efficient and secure infrastructure. Take server and storage virtualization, for example. These two technologies have become the cornerstones of the modern datacenter, but they don't come cheap. Sometimes a vendor will appear to offer SMBs an "affordable" solution along these lines, but more often than not, these products are just stripped-down, less feature-rich versions of their premier products. Enter OSS to our rescue!
So if you've spent any time reading some of the other posts on this blog, you may have realized that I have a lot of love for the Xen Cloud Platform, otherwise known as XCP.
XCP is of course the open source, free version of XenServer. In my opinion, the project is all that and a bag of chips, but on its own, XCP doesn't fully deliver the "Cloud". The ability to pool together low-cost but powerful servers certainly goes a long way, but you need a scalable and performant storage solution to tie everything together.
Now normally, anything marketed as “scalable” and “performant”, particularly when it comes to storage, translates into prohibitively high costs for small shops, but a little known open source project is smashing that concept.
The project's name is SCST, and for those unfamiliar, it is one of two competing multi-fabric storage target subsystems for Linux. SCST is not just "SAN" software. It provides the framework for building a storage virtualization platform that gives your storage infrastructure capabilities you'd normally have to pay vendors like EMC, NetApp or Dell a pretty penny for. Capabilities like:
- Thin Provisioning
- Read-Only Snapshots
- Online LUN Expansion
- Off Host Backup
- High Availability
- I/O Grouping
- LUN Masking and Security Groups
Now, some of these features are not native to SCST but are capabilities of Linux itself. Keep in mind that SCST on its own is powerful, but what takes it to the next level is how flexible and powerful Linux is. When you combine SCST with other projects such as Samba, DRBD, Pacemaker, CLVM and OCFS2, you have a storage platform that rivals what you'd have to pay the big three tens of thousands of dollars for.
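To make that a little more concrete, here's a sketch of what the replication piece of such a stack might look like: a minimal two-node DRBD resource mirroring the logical volume that backs your LUNs. The hostnames, device paths and IP addresses here are all hypothetical, and the exact options you'd want depend on your DRBD version and network.

```
# /etc/drbd.d/r0.res -- hypothetical two-node mirror of the LV backing our LUNs
resource r0 {
    protocol  C;                     # synchronous replication
    device    /dev/drbd0;            # replicated device you'd export via SCST
    disk      /dev/vg_san/lv_luns;   # local backing logical volume
    meta-disk internal;
    on san-a {
        address 10.0.0.1:7789;
    }
    on san-b {
        address 10.0.0.2:7789;
    }
}
```

With something like this in place, SCST exports /dev/drbd0 instead of the raw LV, and Pacemaker can handle failing the target over between the two nodes.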
I’ve been using SCST for a couple of years and I’ve had great results. Performance is excellent and more importantly, it’s stable and reliable.
We primarily use Fibre Channel, with multiple 1Gb iSCSI paths as backup, because as much as I'd like to move to 10Gb Ethernet, the price per port for storage applications (especially when compared to 4Gb Fibre Channel) is still too high. In addition, thanks to Moore's law, there is an abundance of lightly used 4Gb Fibre Channel equipment on the market. Go onto eBay, Craigslist or wherever and you can generally pick up 4Gb FC equipment (switches and HBAs) for a fraction of its debut price.
As far as management goes, SCST (in its native form) is all command line. It has a sysfs interface for performing administrative functions, and it also ships with a fairly comprehensive Perl-based tool called "scstadmin". I mainly use scstadmin, and for tasks like creating LUNs and targets it is pretty straightforward. That said, nothing about managing a vanilla SCST system is streamlined. Creating a target, enabling it, adding initiators, adding LUNs and configuring masking takes no fewer than five commands, and that isn't even counting the basic tasks that need to be performed outside of SCST, such as creating and managing RAID arrays, filesystems or logical volumes. Also, SCST is not distributed in binary form, which means you'll be patching and recompiling your kernel in addition to building SCST itself. This may be troublesome for some. Because of the different versions and distributions of Linux, a patch may not apply cleanly on top of existing vendor patches, or the compilation may fail because you're missing development libraries or headers. Or, even worse, the patching, compiling and installing all go well, but you hit bugs or performance issues because your kernel is either too old or too new. Fortunately, the OSS community has come to our rescue yet again.
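To give you an idea of what "no fewer than five commands" looks like in practice, here's a rough sketch of carving out a single iSCSI LUN through the sysfs interface. The device name, backing volume, target IQN, group name and initiator IQN are all made up for illustration, and the exact sysfs paths can vary a bit between SCST versions, so treat this as a sketch rather than a copy-paste recipe.

```shell
# Register a block-backed vdisk device (names and paths are hypothetical)
echo "add_device disk01 filename=/dev/vg_san/lv_luns" \
    > /sys/kernel/scst_tgt/handlers/vdisk_blockio/mgmt

# Create an iSCSI target
echo "add_target iqn.2013-01.com.example:san.disk01" \
    > /sys/kernel/scst_tgt/targets/iscsi/mgmt

# Create a security group and admit one initiator (this is your LUN masking)
echo "create clients" \
    > /sys/kernel/scst_tgt/targets/iscsi/iqn.2013-01.com.example:san.disk01/ini_groups/mgmt
echo "add iqn.2013-01.com.example:host01" \
    > /sys/kernel/scst_tgt/targets/iscsi/iqn.2013-01.com.example:san.disk01/ini_groups/clients/initiators/mgmt

# Map the device to LUN 0 for that group
echo "add disk01 0" \
    > /sys/kernel/scst_tgt/targets/iscsi/iqn.2013-01.com.example:san.disk01/ini_groups/clients/luns/mgmt

# Finally, enable the target and the iSCSI driver
echo 1 > /sys/kernel/scst_tgt/targets/iscsi/iqn.2013-01.com.example:san.disk01/enabled
echo 1 > /sys/kernel/scst_tgt/targets/iscsi/enabled
```

scstadmin wraps this same sysfs plumbing in friendlier one-liners and can save the result to a config file, but as you can see, there's a fair amount of ceremony either way.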
A fella by the name of Marc A. Smith created a project over at Google Code called the "Enterprise Storage OS" (link), otherwise known as ESOS. ESOS is a Linux OS that boots off of a USB flash drive. The image has SCST and a number of different tools and drivers baked into it, tied together via a very nice text UI. The TUI goes a long way towards streamlining the management of your storage server by putting almost everything you need in a single place. Having the ability to manage your RAID configuration and then jump right into configuring targets is really nice, and it seems like the TUI may evolve to let you configure DRBD as well, which would be awesome. I've been running version 0.1-293, and while this isn't a formal review, I have to say it's very promising and the base features are solid. That said, upgrading an ESOS install could use some streamlining, and the TUI could be further improved by taking a wizard-based approach to some configuration tasks.
Well, that's it for this post. If you need SAN/storage virtualization capabilities, don't have a huge budget for anything other than hardware, and don't mind a challenge or two, you should definitely look at SCST. I'll probably be posting some benchmarks and other data from a comparison I did between the Dell MD3600F and a system running ESOS. Stay tuned!