Storage Multipathing with HP P4000 and IX2

Last week I was discussing with one of HP's storage architects how I normally lay out a VMware cluster so that it has no single point of failure. He made some suggestions that got me thinking, so the next evening I set up a couple of HP P4000 VSAs and an ESXi lab on my laptop to test storage multipathing and see how it could be used to reduce the cost of redundant switches.

Normally when putting together a solution I would include a pair of Cisco 3750G switches so that we can stack them, basically forming a chassis-type switch with dual supervisor engines and two 24-port Gigabit line cards. With the Cisco solution we can form EtherChannels from each server or SAN node into the switch, and all links are active. (See the picture; the switches are also stacked with the Cisco backplane cable, but that is not shown.) The downside to this approach is that EtherChannel only load balances when it is presented with multiple sources and destinations, so if you have only one ESXi server and one storage node, EtherChannel does not help you utilize all of the links. The other downside is that the Cisco 3750G lists at $6,995. Multiply that by two and, even with discounts, that is a fair amount of money for some of the SMBs I work with.
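
For reference, a static EtherChannel on a 3750 stack looks roughly like the sketch below (the interface numbers and VLAN are placeholders, not from my lab). The load-balance line is the key to the limitation above: the hash is computed per source/destination pair, so a single ESXi-host-to-storage-node conversation always lands on the same member link.

    ! Rough sketch only -- interface numbers and VLAN are placeholders
    port-channel load-balance src-dst-ip
    !
    interface range GigabitEthernet1/0/1 , GigabitEthernet2/0/1
     description Uplinks to one ESXi host (one port per stack member)
     switchport mode access
     switchport access vlan 100
     channel-group 1 mode on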

With HP 2910al switches there is really no way to stack the switches so that each server and SAN node can have one link to each switch without causing a loop. So with the HP switch solution we were forced into an active/standby configuration on the NICs. The problem with this becomes clear pretty fast: we only get half the bandwidth from the SAN nodes and servers into the switching fabric. (See the picture; the red Xs represent links in a standby state.)
So logically we need a way to get the most performance out of the HP 2910al switches if that is what the customer wants. The same setup works with the Cisco switches too, but with this approach we can use whatever switches the customer prefers.

Enter Multipathing.

HP has published a PDF (HP LeftHand P4000 SAN) on how to set up their P4000 series SAN with VMware, along with some best practices to follow. Starting on page 6 they describe how to set up multipathing with their SANs. First, a little on the P4000 architecture. The P4000 is a multi-node solution built on basic x86 servers with RAID controllers, SAS or SATA hard disks, and Gigabit (or 10 Gigabit) network cards. Their SAN/iQ software turns these storage nodes into what appears to be a single SAN, capable of “Network RAID,” where your data can be placed on multiple nodes for redundancy. Normally SAN traffic is sent to one node that acts as the gateway, and the other SAN nodes answer the request if that particular piece of data lives on them. This works pretty well for most SMB customers, but with multipathing you can utilize these nodes even more efficiently. Without storage multipathing VMware cannot see each node; it just sees the gateway node. With storage multipathing, VMware can create a connection to multiple nodes, which lets it leverage multiple Gigabit connections without the use of EtherChannel.
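
If you want to see what ESXi is actually doing with the paths, the console gives a quick view. This is just a sketch assuming the ESXi 4.x command syntax I was using; each volume shows up under its own device identifier.

    # List every path ESXi currently has to its storage devices
    esxcfg-mpath -b
    # Show the NMP view of each device, including its current path selection policy
    esxcli nmp device list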

Setup literally took less than five minutes; the longest part was setting up the HP VSAs and waiting for ESXi to load into a VM. The HP PDF does a great job of describing how to do this, but I found that after adding my other VMkernel NICs to the iSCSI initiator it is best to reboot the ESXi server. When I set this up at the office on my VMware lab hardware it worked just as advertised as well. I was even able to get VMware to multipath to my Iomega IX2-200; however, since it doesn't have multiple network cards it didn't really give me any higher throughput or redundancy, but it was fun nonetheless.
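
For anyone doing this step from the command line rather than the walkthrough in the PDF, the port-binding piece looks something like the sketch below on ESXi 4.x. vmhba33 and vmk1 are placeholders for your software iSCSI adapter and the extra VMkernel port.

    # Bind an additional VMkernel port to the software iSCSI adapter (names are placeholders)
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    # Verify which VMkernel NICs are now bound to the adapter
    esxcli swiscsi nic list -d vmhba33

After that, rescan the adapter (or reboot, as noted above) so the additional paths show up.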

One of the other things I noticed is that because it's using Round Robin multipathing, the load is balanced even if you are not pushing 100 MB/s through the first interface. Most of the time my lab gear was only pushing about 10 MB/s on each interface, but it was still balancing that load across the two NICs.
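
Round Robin can also be set per device from the console instead of the vSphere Client. A sketch using ESXi 4.x syntax, with the naa identifier as a placeholder:

    # Switch a device's path selection policy to Round Robin (device ID is a placeholder)
    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
    # Optional tweak some guides mention: rotate paths after every I/O instead of every 1,000
    esxcli nmp roundrobin setconfig --device naa.xxxxxxxxxxxxxxxx --iops 1 --type iops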

Conclusion:
Now that I know this isn't just for big enterprise, as I thought last week, I have a lot more flexibility in my solution designs. I can also meet budget requirements more effectively, or even utilize existing customer infrastructure where possible. After seeing this work I feel much more comfortable using multipathing on the VMware side and ALB (adaptive load balancing) on the HP LeftHand side. Below are more screenshots of my lab gear doing multipathing. Another great guide for iSCSI and multipathing from VMware is located here.

The first two pictures in the gallery are just the Visio drawings from above, and the rest are screenshots of the P4000 and VMware multipathing.



4 Responses to "Storage Multipathing with HP P4000 and IX2"

  1. I prefer multipathing… but as far as I know it's “OK” either way. With multipathing there are fewer options and less chance of an error. With EtherChannel you really need a good understanding of Cisco devices and need to know how to check the status of the port channel and so on.

  2. OK, you stated: “Without storage multipathing VMware cannot see each node; it just sees the gateway node. With storage multipathing, VMware can create a connection to multiple nodes, which lets it leverage multiple Gigabit connections without the use of EtherChannel.”

    I have the same setup; however, VMware is only connecting to the gateway node with multiple connections. How do I get VMware to connect to all of my storage nodes? All 8 of them. Tks.
