Recently I was given the chance to install an HP P2000 / MSA2000i iSCSI SAN for a customer. This blog post details setup best practices for using the MSA2000i (an MSA2324i, technically) with VMware vSphere 4.1. But first, an overview of the environment: one DL360 G6 server, with two more servers to come, each with dual Xeon E5540 processors, 24 GB of RAM, and eight 1 Gbps NICs. The goal of the project was to virtualize the company's servers as well as provide better RTO and RPO for existing servers and a new ERP system. Besides the new hardware, we are also leveraging a Cisco 3560G switch for the network side. This is a temporary solution: the single switch will be replaced with two Cisco 3750Gs configured in a stack to eliminate the single point of failure that the 3560G presents.
In order to do multipathing properly we will need two SAN subnets for iSCSI data traffic. Inside each subnet we will have two SAN ports and one port from each ESXi server. Setting it up this way gives us multiple source and destination addresses, which maximizes the number of paths between SAN and server; this in turn provides redundancy as well as the ability to leverage multiple gigabit links in tandem. I won't go into much detail on the servers, but I would recommend using the HP P4000 guide, which also describes multipathing to the P4000. Ignore the parts about the SAN in this case and just focus on the VMware ESXi iSCSI initiator setup. When you are done setting up your servers according to that guide you should have something like what is in the following pictures:
The first screenshot shows the vSwitch configuration: two physical NICs and two VMkernel ports. The second screenshot shows one of the VMkernel ports, and how one of the NICs must be placed in unused mode and the other in active mode. The setup for the other VMkernel port is just the opposite of this one (so vmnic6 is active while vmnic7 is unused). This effectively pins each iSCSI VMkernel port to a single physical NIC, so that each NIC has its own IP address. There is also some configuration that must be done to map these VMkernel ports to the iSCSI initiator… see the HP guide for that.
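The VMkernel-to-initiator mapping mentioned above can also be done from the vSphere 4.1 console instead of following the GUI steps in the HP guide. A rough sketch, assuming the software iSCSI adapter is vmhba33 and the two iSCSI VMkernel ports are vmk1 and vmk2 (check yours with `esxcfg-vmknic -l`; adapter and port names will vary per host):

```shell
# Bind each iSCSI VMkernel port to the software iSCSI adapter
# (vSphere 4.x esxcli syntax; vmhba33/vmk1/vmk2 are examples only)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify that both bindings took effect
esxcli swiscsi nic list -d vmhba33
```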
Now for the new part: the HP MSA2000i SAN.
Configuration of this SAN can be done completely from a web browser, making it very simple for the SMB market to work with. It comes with a CD that will help discover it the first time you plug it in; however, as long as you have DHCP turned on it will go out and get an IP address, and you just need to look through your DHCP leases to see which one was just handed out. After logging in you are presented with an easy to understand interface:
Basically this interface shows you your vdisks (think of these as RAID groups), and under those are the actual volumes that get presented to your initiators. From the main page you can manage the configuration of the MSA controllers by clicking the Configuration tab and going to System Settings -> Network Interfaces. This is where we will set the static IP addresses for each controller. These IP addresses should be allocated from a management VLAN, or, if you don't use a management VLAN, from your normal data VLAN. This MSA has redundant controllers, so we will need two addresses, one for each controller.
Before I get into any more configuration of the SAN I think it would be good to see exactly how the back of the MSA looks, so that it's easier to understand. I have also used my amazing GIMP skills to add some boxes with the numbers 1 and 2 in them. These are the subnets the ports belong to: the two ports in group 1 are in the first SAN subnet, and the two in group 2 are in the second subnet.
So what the “Configure Network Interfaces” page did was configure the ports on the back just to the right of the green box; these are the management interfaces. Now for the iSCSI interfaces: to configure these you go to the same configuration menu, drill down into System, and then pick “Configure Host Interfaces”… yeah, it's a little confusing at first. I would probably have labeled them “Configure iSCSI Target Interfaces” or something like that… oh well. Once you click that you are given a screen with four sets of IP addresses. What we need to do here is ensure that each controller has one port in each subnet, so (for simplicity) ports A1 and B1 are in one subnet and A2 and B2 are in the other. Here is what it looks like when it's completed.
We used 192.168.20.0/24 and 192.168.21.0/24 as our two iSCSI subnets. I did not configure a gateway on these ports because there should be no reason for them to access anything outside of our iSCSI VLAN. Once that is all set up and your volumes have been mapped to a controller, you should be able to enter the MSA's host interface addresses into the ESXi software initiator and rescan the HBA. Make sure to enter ALL of the MSA's IP addresses; unlike the LeftHand, where you only need one, you will need multiple addresses for multipathing to work on the MSA.
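If you prefer the console to the vSphere Client for the rescan step, the 4.x hosts ship with a rescan command. A sketch, again assuming the software iSCSI adapter is vmhba33 (substitute your own adapter name):

```shell
# Rescan the software iSCSI adapter so the newly entered MSA
# targets and their paths show up (vmhba33 is an example name)
esxcfg-rescan vmhba33
```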
After rescanning the HBA we should have a bunch of paths to our volumes.
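The number of paths you should expect follows directly from the two-subnet design. A small illustrative sketch of the math (plain shell arithmetic, not a VMware tool; the numbers are specific to this layout):

```shell
# Each ESXi host has one VMkernel port per iSCSI subnet, and the
# MSA exposes two target ports per subnet (one on each controller).
SUBNETS=2
HOST_PORTS_PER_SUBNET=1
TARGET_PORTS_PER_SUBNET=2

# A path exists for every host-port/target-port pair within a subnet
PATHS=$((SUBNETS * HOST_PORTS_PER_SUBNET * TARGET_PORTS_PER_SUBNET))
echo "iSCSI paths per volume, per host: $PATHS"
```

So each host should see four paths to every volume: two through each subnet, one to each controller.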
Even after configuring round robin multipathing, not all of the paths will show “Active (I/O)”; this is because of how the MSA operates. Basically, when you create a volume you map it to a controller, and that becomes the active controller servicing I/O for the volume. For failover purposes the other controller assumes a passive state for that volume; if the active controller fails, the passive controller takes over I/O for those volumes. What is nice about the MSA2000, along with other storage solutions, is that volumes can be mapped to either one of the controllers for their primary I/O. This allows the MSA to take advantage of CPU and network resources in both controllers during normal operation, which maximizes the performance of the SAN. Obviously, if a controller fails, throughput may be a little slower while that controller is offline.
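For reference, round robin can be set per device from the vSphere Client, or from the 4.x command line. A sketch of the CLI route; the naa identifier below is a placeholder, so replace it with the device ID shown for your volume in the paths view:

```shell
# Show each device and its current path selection policy (PSP)
esxcli nmp device list

# Switch one device to round robin (vSphere 4.x syntax;
# the naa identifier here is a made-up placeholder)
esxcli nmp device setpolicy --device naa.600c0ff000000000000000000000 --psp VMW_PSP_RR
```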
Overall, setup of the MSA took maybe two hours from the time it was pulled out of the box until volumes were ready to be presented to the VMware cluster. Array initialization did take much longer for the RAID 5 and RAID 10 volumes that I configured. The price point of the MSA2000i series is very reasonable for the SMB market: the 12-drive LFF model with dual controllers has an MSRP of $7,800, while the 24-drive SFF model with dual controllers is $8,100. The only thing this does not include is drives, and obviously this is where you can make the price very high or keep it reasonable. You can, however, mix and match drives inside each chassis. If you have database servers or MS Exchange you can put in some SAS drives and create a high-end datastore, while for file servers you could add a few terabytes of SATA drives for higher storage capacity. Plus, it can grow with your business by simply adding an MSA70 shelf to the mix. On the SFF models you can add 3 more shelves for a total of 99 drives, and on the LFF models you can add 4 more shelves for a total of 60 drives.
Combine an MSA2000i with the VMware Essentials Plus package, the Veeam Essentials for VMware bundle, a Cisco switch, and three HP dual six-core servers with a bunch of RAM, and you will have no problem running almost any SMB workload at a very reasonable price.
Shopping list for MSA2000i SAN solution used in this article:
- HP MSA2324i SAN with Dual Controllers and 3 Year Warranty – $8,100
- 12 x 146GB 15k RPM Hard Drives – $5,268