A while back I wrote a review of the HP P2000 G2 (aka MSA2000 G2); it has generated a decent number of page views, and the array was very easy to work with. So when I heard there was a 3rd generation P2000 out, I figured I would check it out and see what has changed. I will also explain why you should buy one and get away from local storage in your SMB ESXi servers. Before we get into the details, I should note that if you have an MSA2000 G1 or G2, you can upgrade it to the G3 controllers and get the new features like SAS 600, remote snapshots, 8Gb FC, 10Gb iSCSI, etc.
What’s New in the P2000 G3 MSA
- Two-port 10 GbE iSCSI controller or
- Four-port 6 Gb SAS controller or
- Two new 8 Gb Fibre Channel controllers:
- Standard model with two 8 Gb FC host ports each
- Combo model with two 8 Gb FC host ports and two 1 GbE iSCSI ports each
- Controllers come with 2 GB cache memory each
- Increased support to seven P2000 LFF disk enclosures (96 LFF drives)
- Increased support to five D2700 SFF disk enclosures (149 SFF drives)
- 6 Gb SAS back end and HDD support
- 64 Snaps and clone capability come standard on G3 models
- Optional 512 snapshots max (double the MSA2000 G2)
- Optional controller-based replication (Remote Snap) with the FC or the FC/iSCSI Combo controllers only
- 512 Max LUN support
- Higher performance from an upgraded controller with increased I/O throughput
- Improved System Management Utility (SMU) user interface
- Full support for G1/G2 to G3 upgrade, including cross-protocol upgrades
(Information from the HP Spec sheets)
Why the P2000?
The P2000 array sits at the bottom of the SAN food chain in the HP portfolio, but don't rule it out. With features like 8Gb FC and 10Gb iSCSI coming into view, these are not just for 10-user networks (not that they were before). The P2000 also brings much more to the table than you can get with the P4000 VSA software or local storage. Plus, if you have followed my blog for a while, you may have read another post I did titled "Recipe for SMB Clusters"; that post describes how a small business can move from a single VMware ESXi server to a full HA cluster without breaking the bank. The key product that makes this possible is the P2000 iSCSI SAN.
HP P2000 compared to the HP P4000 VSA
I have designed and deployed solutions based on the HP P4000 VSA software, and it does a great job for a true SMB (when I say SMB I mean less than 50 users). However, it has some shortcomings which make the P2000 a better fit in many cases. As you probably already know the P2000 is a drive shelf with one or two controllers. The drives in the shelf can be accessed by either controller for high availability. In layman’s terms: One box with as much redundancy as possible. The VSA is completely different; instead of redundant controllers, you have multiple nodes. The upside of this is still a fully redundant SAN, but the downside is that we need drives for each node and software licenses for each node. In layman’s terms: Buy 2 of everything and form them into a redundant cluster. Let’s do a quick cost comparison for a SAN that will have approximately 1.8TB of raw space.
| Part | HP VSA (2-node cluster) | HP P2000 (SFF iSCSI) |
|------|-------------------------|----------------------|
| Chassis / node licenses | $8,780 (2 × $4,390) | $8,950 |
| 300 GB drives | $5,400 (12 × $450) | $2,700 (6 × $450) |
| Totals | $14,180 ($7.88/GB) | $11,650 ($6.47/GB) |
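For the curious, here is how the $/GB figures in the table work out. The ~1.8TB divisor reflects usable space in both designs: the VSA's 12 × 300GB drives mirror across the two nodes down to 1.8TB, while the P2000's 6 × 300GB drives provide 1.8TB raw.

```python
# Sketch of the $/GB math from the table above (all prices from the post).
# Both designs net ~1.8TB usable: the VSA's 12 x 300GB drives are mirrored
# across the two nodes, while the P2000's 6 x 300GB are counted raw.

def cost_per_gb(total_cost, usable_gb):
    return round(total_cost / usable_gb, 2)

usable_gb = 6 * 300                # ~1.8TB usable either way

vsa_total = 8780 + 5400            # 2 node licenses + 12 x $450 drives
p2000_total = 8950 + 2700          # chassis + 6 x $450 drives

print(f"VSA:   ${vsa_total} -> ${cost_per_gb(vsa_total, usable_gb)}/GB")
print(f"P2000: ${p2000_total} -> ${cost_per_gb(p2000_total, usable_gb)}/GB")
```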
So it looks like you would be saving about $2,500 on a 6-drive system. Scalability is the next limitation: because the VSA piggybacks on a normal server, you are limited to at most 16 SFF drives per server. The P2000 chassis in this comparison will hold 24 drives before an expansion shelf is needed. As you can probably tell, the cost difference grows fairly linearly as your storage requirements grow. Also, if you like backups to disk (for staging or D2D2T scenarios), you will find that the P2000 can have SAS and SATA drives in the same shelf. You cannot mix drive types in the VSA, and storing backup data on SAS drives is pretty expensive. I won't go into much detail on what you need to do to actually present 5TB through the VSA software, but you would need to add about nine 600GB SAS drives… to each node (since we started with SAS drives in our VSA), so 18 drives total (9 for each node). If we had bought the P2000, we could simply add six 1TB SATA drives, present them to our Veeam Backup virtual machine (innocent plug for vPower), and be backing up in no time.
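The drive math above can be sketched out quickly. The assumptions here are mine rather than HP's: the VSA mirrors data across both nodes (so each node needs the full capacity), and the P2000 SATA set is built as RAID 5 with one drive's worth of parity.

```python
import math

# Back-of-the-envelope drive counts for adding ~5TB of backup capacity.
# Assumptions (mine, not HP's): the VSA mirrors data across both nodes,
# so each node holds the full capacity; the P2000 set is RAID 5 with one
# drive's worth of parity. Drive sizes use the marketing GB figures.

target_gb = 5000

# VSA: the shelf started with SAS, so the backup space must be SAS too
sas_gb = 600
per_node = math.ceil(target_gb / sas_gb)            # 9 drives per node
vsa_drives = 2 * per_node                           # 18 drives across both nodes

# P2000: cheaper SATA can share the shelf with the existing SAS drives
sata_gb = 1000
p2000_drives = math.ceil(target_gb / sata_gb) + 1   # 6 drives incl. parity

print(vsa_drives, p2000_drives)
```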
One other thing to consider is that because you’re making your VMware servers do storage tasks on top of normal hypervisor activities, you are giving up resources from your virtual machine pool. So as you add more ESX servers and back-end them to your VSA SAN, the nodes that are running the VSA will require more resources for storage serving and have less available for virtual machines. With the P2000 you could just hook up the host and go without adding additional load on VMware servers.
Another note on the P4000 line (This section was added after initial posting)
Before I start I should note that we install more P4000 SANs than any other make or model; in a medium-sized environment (up to, say, 200 users) they work great and we don't normally have any problems with them. However, there are some things on the P4000 that you just won't get like you would with most other SANs, the biggest of which (at least I think anyhow) is two-domain multipathing. Because of the way the P4000 does its networking, you are not able to create an interface in Subnet A and another interface in Subnet B. If you could do this, you could have two simple layer 2 switches and two NICs in each VMware server and have two full paths end to end. Unfortunately, the P4000 forces you to use the Linux 'bond' type driver, so you are stuck with a single subnet for the entire SAN. This isn't the end of the world, but in order to get decent amounts of bandwidth in and out of each node, you're going to need to buy a pair of Cisco 3750X's or something similar that has backplane stacking. If you try to use two "dumb" switches, you will be forced to set up the NIC team in an active/standby bond (or something else that doesn't create a loop).
HP P2000 compared to Local Datastores
While I recommend local storage for one or two virtual machines, or for just starting out when the budget is a factor, I also advise moving to a SAN as quickly as possible. First off, VMware now includes vMotion with all of its packages except VMware Essentials, so if you plan to take advantage of that you will need shared storage. Also, if you are purchasing local storage, you are inevitably going to waste space on your ESX servers. There is almost no situation where you will use all the storage in the local box without needing more or having too much, and as your virtual machines grow you will either need to shut down the ESX server to grow the local storage pool or move the VM (which will also need to be shut down) to a different node that has more space. Performance is another issue with local storage. At some point you will have one server that is overloading its local storage (IOps, throughput, etc.) while another box sits idle. If you had all the drives from both servers in a SAN, your data would be spread across twice as many spindles as it is on local storage, giving your VMs better performance when they need it instead of letting drives sit idle while another VM needs more IOs. The final hit against local storage, in my opinion, is that you will need to uplift the hardware warranty on the servers hosting your VSA or local storage to something like a 6-hour call-to-repair warranty if you want minimal downtime. This is because with the normal 3-year hardware warranty, parts are not guaranteed to be stocked locally and may take 24 hours or more to arrive. If you have several servers and 6-hour call-to-repair warranties run $1,200-$1,500 per server, you can quickly offset the cost of the P2000 and an uplifted warranty on it.
While I was out perusing for information on this new model, one thing I never ran across was VMware-specific documentation. This disappointed me because there are PDF setup and best-practice guides for things like SAP, SQL, Exchange, and other platforms… but nothing VMware related. I am hoping to correct this after I get a chance to put one of the new G3s through its paces. Hopefully I will have enough time to write a full configuration and best-practices guide and answer some of the questions I have listed below.
- Can I connect my ESXi hosts via both FC and iSCSI at the same time? Not necessarily to the same LUN, but at least tier 1 LUNs over FC and tier 2 over iSCSI.
- Can I connect ESXi hosts via Fibre Channel and connect a Veeam backup server via iSCSI, and have it back up VMDKs inside the LUNs presented over Fibre Channel to the production ESXi hosts?
- Now that the P2000 G3 has remote replication abilities, will it work with SRM?
- What is the best path selection policy to use with it: Round Robin, MRU, etc.?
- Will it support VMware VAAI?
- Whatever else I can think of during testing