I’ve done several posts on how to design a cost-effective SMB cluster, but with the arrival of the HP P2000 G3 SAS SAN the design has gotten MUCH more cost-effective. While the HP SAS SAN is not entirely new, the updated bundles and features of the G3 model have made it a truly viable option.
If you haven’t already seen my other articles, I would encourage you to add them to your reading list. Browsing through them will show you the general design and the price point, which will help you appreciate the added simplicity and lower price of the design presented in this article.
http://jpaul.me/?p=402 - Recipe for SMB Clusters (version 1.0)
http://jpaul.me/?p=869 - HP P2000/MSA G3: A First Look for SMB's
In the first article I presented a design that used an iSCSI version of the P2000 SAN. While it's still a great SAN, it requires two switches for redundancy, and depending on the switches you choose that can add a significant amount of money to the bill. It also means configuring separate VLANs for iSCSI traffic, and relying on more devices that could fail.
Enter HP P2000 G3 SAS… it will be the key to version 2.0 of the SMB Cluster Design.
This article is going to be a little different from the first version though… I’m not going to go through the year by year building process, and I’m not going to lay out all of the costs or the software required. This article is strictly about the hardware, and how it’s going to save you money over the version 1.0 article. So to get started go read that first link, then come back to this one.
One of the largest costs in version 1.0 of the SMB cluster design is the network side… meaning the Cisco switches. You could have cut them out in favor of something cheaper, but you would probably have created a single point of failure on the iSCSI side of things. With the SAS version of the P2000 we no longer need network connectivity for iSCSI, because our hosts are directly connected to each of the SAS controllers. The downside is that we only have 8 SAS ports, so if you want redundancy you can only connect up to four hosts. Since we are talking SMB, we will assume you have no more than 3 ESXi servers, because the VMware SMB packages only license up to 6 processors anyway.
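To make the port math concrete, here's a quick sketch. The 8-port and dual-path figures come from the description above; the script itself is purely illustrative:

```python
# Port budgeting for the P2000 G3 SAS: the enclosure exposes 8 host-facing
# SAS ports. A redundantly connected host needs one path to each of the
# two controllers, i.e. 2 ports per host.
TOTAL_SAS_PORTS = 8
PORTS_PER_REDUNDANT_HOST = 2  # one cable to controller A, one to controller B

max_redundant_hosts = TOTAL_SAS_PORTS // PORTS_PER_REDUNDANT_HOST
print(max_redundant_hosts)  # 4: e.g. 3 ESXi hosts plus a Veeam backup server
```

That's exactly why the design tops out at 3 ESXi hosts with one port pair left over for the backup server.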
So let’s take a look at what our infrastructure needs to look like: (excuse the drawing… it seems as though Visio isn’t on my laptop right now)
This drawing basically shows how the SAN connects to both your ESX servers and your Veeam Backup server. Everything is SAS; the only Ethernet is for management of the SAN and normal network traffic from servers to clients. This setup still allows you to do “Direct Attached” backups with Veeam too.
Data transfer rates are MUCH faster than with iSCSI, on the order of 300-400 MBps, and Direct SAN backups run at about the same rate… the only thing that slows them down is the CPU bottleneck in the Veeam server. If you were to turn off compression you would most likely see the full rate.
While the picture shows a Cisco 3750 switch, you could easily use any gigabit switch… and you really only “have” to have one. Yes, it is a single point of failure, but if it's configured as a simple L2 switch you could just keep a cold standby on hand in case of a failure.
So does it support HA and DRS?
Yes. The SAS SAN fully supports all of the features of VMware. Right now the P2000 line does not include VAAI support, but I’m told they are working on the code, and I’m hoping to get to test it out! (I’ll be sure to blog about it if I get the chance to test it)
To the VMware servers the HBAs show up just like a Fiber Channel HBA would, listing out the controllers and any LUNs that you have presented.
So what’s the bill?
Well, the SAS SAN comes in a few thousand dollars cheaper than the iSCSI SAN. In the first article I used $20k as a generic amount for the SAN, and I think I based that on a couple TB of usable space… with the SAS P2000 G3 you can get the SAN bundled with 24 x 300 GB 10k SAS drives for $18k (MSRP)! That’s 7.2TB of RAW space!
You will also need HBAs since we aren’t using Ethernet, but they are only $200 per card. SAS cables are about $120 each… and you’ll need two per server plus one for the Veeam server (two if you want it to be redundant too). So while there are some added costs on the connection side, the savings are high enough that they not only offset the cost of the cables and HBAs, they still bring the total price in lower than with an iSCSI SAN.
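Here's a rough tally of the SAS option using the MSRP figures quoted above. The host counts (3 ESXi hosts, each with one HBA and two cables, plus a non-redundant Veeam server) are my assumptions for illustration:

```python
# Rough bill for the SAS option, using the figures quoted in the article.
SAN_BUNDLE = 18_000   # P2000 G3 SAS bundled with 24 x 300 GB 10k drives (MSRP)
HBA_EACH   = 200
CABLE_EACH = 120

esxi_hosts = 3
hbas   = esxi_hosts + 1       # assumed: one HBA per ESXi host, one for Veeam
cables = esxi_hosts * 2 + 1   # two per ESXi host, one for Veeam (non-redundant)

total = SAN_BUNDLE + hbas * HBA_EACH + cables * CABLE_EACH
print(total)  # 19640 -- still under the ~$20k iSCSI SAN before you buy switches
```

Even with the connectivity extras, you come in under the $20k iSCSI SAN figure, and that's before counting the redundant switches the iSCSI design also needs.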
If you want to compare it to the Fiber Channel SAN, you are in two different ballparks. The HBAs alone for 8 Gb Fiber Channel are $1200 each. Then figure a couple grand for Fiber Channel switches (you’ll need two), plus fiber cables. In total you’re talking a lot more than the SAS SAN. So even if the Fiber Channel P2000 cost the same as the SAS model (which it doesn’t), you would still have an estimated $10k in additional Fiber Channel gear to make it work.
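A quick back-of-envelope comparison of the extra connectivity gear, using the rough numbers above. The $2,000-per-switch figure is my reading of "a couple grand," and fiber cabling is left out, so treat this as a floor for the FC side:

```python
# Extra gear needed beyond the SAN itself, per the article's rough pricing.
FC_HBA_EACH    = 1_200
FC_SWITCH_EACH = 2_000   # assumed from "a couple grand"; two needed for redundancy
hbas = 4                 # 3 ESXi hosts + the Veeam server

fc_extra  = hbas * FC_HBA_EACH + 2 * FC_SWITCH_EACH
sas_extra = hbas * 200 + 7 * 120   # SAS HBAs and cables priced earlier

print(fc_extra, sas_extra)  # 8800 vs 1640, before any fiber cables
```

Several thousand dollars of difference in connectivity alone, before the FC SAN's higher base price even enters the picture.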
As for the Veeam Backup server, there are a lot of tricks you can do with an HP server to bring the price of “big” storage for the disk to disk backup down. Maybe one of these days I will do a post on how we do it.