HP P2000/MSA G3: A First Look for SMBs

Related Articles include: HP P2000 (MSA2000i) Review and Running vSphere 4 on HP P2000

A while back I wrote a review of the HP P2000 G2 (aka MSA2000 G2), and it has generated a decent number of page views; the unit was also very easy to work with. So when I heard that a third-generation P2000 was out, I figured I would check it out and see what has changed. I will also explain why you should buy one and get away from local storage in your SMB ESXi servers. Before we get into the details, I should note that if you have an MSA2000 G1 or G2, you can upgrade it to the G3 controllers and get the new features like SAS 600, remote snapshots, 8Gb FC, 10Gb iSCSI, etc.

What’s New in the P2000 G3 MSA

  • Two-port 10 GbE iSCSI controller or
  • Four-port 6 Gb SAS controller or
  • Two new 8 Gb Fibre Channel controllers:
    • Standard model with two 8 Gb FC host ports each
    • Combo model with two 8 Gb FC host ports and two 1 GbE iSCSI ports each
  • Controllers come with 2 GB cache memory each
  • Increased support to seven P2000 LFF disk enclosures (96 LFF drives)
  • Increased support to five D2700 SFF disk enclosures (149 SFF drives)
  • 6 Gb SAS back end and HDD support
  • 64 Snaps and clone capability come standard on G3 models
  • Optional 512 snapshots max (double the MSA2000 G2)
  • Optional controller-based replication (Remote Snap) with the FC or the FC/iSCSI Combo controllers only
  • 512 Max LUN support
  • Higher performance with an upgraded controller with increased I/O performance
  • Improved System Management Utility (SMU) user interface
  • Full support for G1/G2 to G3 upgrade, including cross-protocol upgrades

(Information from the HP Spec sheets)

Why the P2000?

The P2000 array is at the bottom of the food chain (SAN food chain) in the HP portfolio, but don’t rule it out. With features like 8Gb FC and 10Gb iSCSI coming into view, these are not just for 10-user networks (not that they were before). The P2000 also brings much more to the table than you can get with the P4000 VSA software or local storage. Plus, if you have followed my blog for a while, you may have read another post I did titled “Recipe for SMB Clusters”; that post describes how a small business can move from a single VMware ESXi server to a full HA cluster without breaking the bank. The key product that makes this possible is the P2000 iSCSI SAN.

HP P2000 compared to the HP P4000 VSA

I have designed and deployed solutions based on the HP P4000 VSA software, and it does a great job for a true SMB (when I say SMB I mean less than 50 users). However, it has some shortcomings which make the P2000 a better fit in many cases. As you probably already know the P2000 is a drive shelf with one or two controllers. The drives in the shelf can be accessed by either controller for high availability. In layman’s terms: One box with as much redundancy as possible. The VSA is completely different; instead of redundant controllers, you have multiple nodes. The upside of this is still a fully redundant SAN, but the downside is that we need drives for each node and software licenses for each node. In layman’s terms: Buy 2 of everything and form them into a redundant cluster. Let’s do a quick cost comparison for a SAN that will have approximately 1.8TB of raw space.

Cost for:

Part                     HP VSA (2-node cluster)    HP P2000 (SFF iSCSI)
Chassis / Node License   $8,780 (2 x $4,390)        $8,950
300 GB Drives            $5,400 (12 x $450)         $2,700 (6 x $450)
Totals                   $14,180 ($7.88/GB)         $11,650 ($6.47/GB)
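As a sanity check, the per-GB figures in the table work out like this (a quick sketch using the list prices above; your actual quotes will vary):

```python
# Reproduce the cost-per-GB figures from the table above.
# Both designs target ~1.8 TB (1800 GB) of raw space; the VSA needs
# a full set of drives in each of its two nodes to mirror the data.

raw_gb = 6 * 300  # 1.8 TB of raw space

vsa_total = 2 * 4390 + 12 * 450   # two node licenses + drives for both nodes
p2000_total = 8950 + 6 * 450      # one chassis + a single set of drives

print(f"VSA:   ${vsa_total} (${vsa_total / raw_gb:.2f}/GB)")
print(f"P2000: ${p2000_total} (${p2000_total / raw_gb:.2f}/GB)")
```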

So it looks like you would save about $2500 on a 6-drive system. Scalability is the next limitation: because the VSA piggybacks off of a normal server, you are limited to at most 16 SFF drives per server. The P2000 chassis in this comparison will hold 24 drives before an expansion shelf is needed. As you can probably tell, the cost difference grows fairly linearly as your storage requirements grow. Also, if you like backups to disk (for staging or D2D2T scenarios), you will find that the P2000 can have SAS and SATA drives in the same shelf. You cannot mix drive types in the VSA, and storing backup data on SAS drives is pretty expensive. I won’t go into much detail on what you need to do to actually present 5TB through the VSA software, but you would need to add about nine 600GB SAS drives… to each node (since we started with SAS drives in our VSA), so 18 drives total (9 per node). If we had bought the P2000, we could simply add six 1TB SATA drives, present them to our Veeam Backup virtual machine (innocent plug for vPower), and be backing up in no time.
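The drive math in that backup example can be sketched as follows. These are illustrative figures from the paragraph above, and the extra parity drive in the P2000 count is my assumption (the post just says six drives):

```python
import math

target_gb = 5000  # ~5 TB of backup-to-disk space

# VSA: the cluster started on 600 GB SAS drives, and every node
# needs its own full copy of the data.
sas_per_node = math.ceil(target_gb / 600)   # 9 drives per node
vsa_drives = 2 * sas_per_node               # 18 drives across both nodes

# P2000: mix one set of 1 TB SATA drives into the same shelf.
# Assuming RAID 5, add one drive of parity on top of the raw need.
sata_drives = math.ceil(target_gb / 1000) + 1  # 6 drives, one set only

print(vsa_drives, sata_drives)
```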

One other thing to consider is that because you’re making your VMware servers do storage tasks on top of normal hypervisor activities, you are giving up resources from your virtual machine pool. So as you add more ESX servers and back-end them to your VSA SAN, the nodes running the VSA will require more resources for storage serving and have less available for virtual machines. With the P2000 you could just hook up the host and go, without adding load to your VMware servers.

Another note on the P4000 line (This section was added after initial posting)

Before I start I should note that we install more P4000 SANs than any other make or model; in a medium-sized environment (up to say 200 users) they work great, and we don’t normally have any problems with them. However, there are some things on the P4000 that you just won’t get like you would with most other SANs. The biggest of which (at least I think anyhow) is two-domain multipathing. Because of the way the P4000 does its networking, you are not able to create an interface in Subnet A and another interface in Subnet B. If you could do this, you could have two simple layer 2 switches and two NICs in each VMware server and have two full paths end to end. Unfortunately, the P4000 forces you to use the Linux ‘bond’ type driver, so you are stuck with a single subnet for the entire SAN. This isn’t the end of the world, but in order to get decent amounts of bandwidth in and out of each node, you’re going to need to buy a pair of Cisco 3750Xs or something similar that has backplane stacking. If you try to use two “dumb” switches, you will be forced to set up the NIC team in an active-standby bond (or something that doesn’t create a loop).

HP P2000 compared to Local Datastores

While I recommend local storage for one or two virtual machines, or for just starting out when budget is a factor, I also advise moving to a SAN as quickly as possible. First off, VMware now includes vMotion with all of its packages except VMware Essentials, so if you plan to take advantage of that you will need shared storage. Also, if you are purchasing local storage, you are inevitably going to waste space on your ESX servers. There is almost no situation where you will use all the storage in the local box without needing more or having too much, and as your virtual machines grow, you will either need to shut down the ESX server to grow the local storage pool or move the VM (which will need to be shut down) to a different node which has more space.

Performance is another issue with local storage. At some point you will have one server that is overloading its local storage (IOPS, throughput, etc.) while another box sits idle. If you had all the drives from both servers in a SAN, your data would be spread across twice as many spindles as it is on local storage, giving your VMs better performance when they need it rather than letting drives sit idle while another VM needs more I/O.

The final hit against local storage, in my opinion, is the fact that you will need to uplift your hardware warranty on the servers hosting your VSA or local storage to something like a 6-hour call-to-repair warranty if you want minimal downtime. This is because with the normal 3-year hardware warranty, parts are not guaranteed to be local and may take 24 hours or more to arrive. If you have several servers, and 6-hour call-to-repair warranties are $1200-$1500 per server, you can quickly offset the cost of the P2000 and an uplifted warranty on it.
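To put rough numbers on that last point (the host count here is hypothetical; the uplift price is the midpoint of the $1200-$1500 range mentioned above):

```python
# Break-even sketch: uplifting every local-storage host to a 6-hour
# call-to-repair warranty vs. uplifting only a shared P2000.
uplift_per_box = 1350   # midpoint of the $1200-1500 range quoted above
hosts = 4               # hypothetical number of ESX servers

local_storage_cost = hosts * uplift_per_box  # every host carries the uplift
san_cost = uplift_per_box                    # only the P2000 needs it

print(f"Uplift savings with a SAN: ${local_storage_cost - san_cost}")
```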

VMware documentation?

While I was out perusing for information on this new model, one thing I never ran across was VMware-specific documentation. This disappointed me because there are PDF setup and best-practice guides for things like SAP, SQL, Exchange, and other platforms… but nothing VMware-related. I am hoping to correct this problem after I get a chance to put one of the new G3s through its paces. Hopefully I will have enough time to write a full configuration and best-practices guide and answer some of the questions I have listed below.

  1. Can I connect my ESXi hosts via both FC and iSCSI at the same time? Not necessarily to the same LUN, but at least tier 1 LUNs over FC and tier 2 over iSCSI.
  2. Can I connect ESXi hosts via Fibre Channel and connect a Veeam backup server via iSCSI, and have it back up VMDKs inside the LUNs presented over Fibre Channel to the production ESXi hosts?
  3. Now that the P2000 G3 has remote replication abilities, will it work with SRM?
  4. What is the best path selection policy to use with it: Round Robin, MRU, etc.?
  5. Will it support VMware VAAI?
  6. Whatever else I can think of during testing.



41 Responses to "HP P2000/MSA G3: A First Look for SMBs"

  1. Very interesting. I’ve been searching for as much information as I can find about VMware, VEEAM and the P2000 and thank GOOGLE I got to your blog. I am about a month away from migrating our infrastructure from physical servers with local storage to virtual with centralized storage. I’m being presented the P2000 as the storage array we should buy, and our software vendor suggests setting up Fault Tolerant VMs. Since backup software cannot take image-based backups of FT VMs, I’m looking at storage snapshot options. Would you have an opinion or best practice reference for backing up FT VMs?

  2. Thanks for the comment Chris.

    I just got a demo P2000 installed in my lab rack today. I’m not sure why you wouldn’t be able to use Veeam on an FT virtual machine, but I tell you what: give me 48 hours and I will test it out and let you know if it really works or not. The problem that you will have by doing SAN-based snapshots on virtual machines is that they will not be consistent, because the SAN has no way to alert all the VMs on a particular LUN that it’s being snapped. So your backups would be as good as yanking a power cord out of a server. Anyhow, I will set up a lab of your situation tonight with a fault-tolerant VM on the P2000 and back it up with Veeam to see what happens.

  3. Wow… That was fast.

    I was reading the following KB and then spoke with an engineer at VEEAM today who confirmed that no backup S/W currently takes image backups of FT VMs because VMware doesn’t support snapshotting them. I’m trying to find an alternative to loading guest O/S BU agents and the storage snapshot sounded promising. Maybe not though. What do you mean the SAN snapshots will not be consistent? Does that mean SAN snapshots are only good for static files? How about DB files? Thanks.


  4. I just tried it out and (totally forgot) that you can’t snapshot FT VMs; it’s a limitation of VMware.

    So when the SAN takes a SAN snapshot, the first thing it will do is pause I/O. Well, normally you have a VSS plugin that will tell Windows “Hey, tell all your VSS-compliant apps we are gonna pause a minute,” then they do (they flush their data to the disk and get write confirmations), then the snapshot is taken, then the SAN sends an all-clear to Windows and I/O is resumed. If you were presenting LUNs directly to a Windows box you could get away with this, because you would simply install the VSS tools from the SAN into Windows, and because the SAN knows what IQNs are mounting its LUNs, it knows who to send the pause message to.

    With a virtual machine, the LUN is mounted by VMware, so it’s the VMware software (or hardware) IQN that the SAN sees. When you do a snapshot, the SAN will send that pause message to the ESX/ESXi box, but VMware is like “WTF… I don’t understand that,” and the SAN says “OK, welp, I’m doing it anyway,” so your servers have no idea what happened. So when you restore that snapshot and boot up your VM… the VM doesn’t see a proper shutdown and unmounting of its NTFS, so it thinks that it was just unplugged from power. This is where you might get lucky, or you might be looking for a new job.

    Because the VM might not have been trying to write anything when you took the snapshot… if it wasn’t, then you’re golden. But if it was (like a DB transaction), then you will have “unplugged” it right in the middle, causing DB inconsistency and overall badness.

    Does that help???

  5. That was super clear. Centralized storage is still new to me, so thanks, I appreciate the breakdown. So, this must mean that unless you’re using raw disk mapping for all the VMs attached to a LUN, SAN shots should never be taken without shutting VMs down first… would that be fair to say? Does this also mean that without VAAI compliance, all the benefits of SAN shots are lost in a VM environment?

  6. Actually, even an RDM won’t do… VMware is still the initiator in that case. The only way to make SAN snapshots worthwhile on a VM in the situation you’re talking about would be to use the Microsoft iSCSI initiator to mount the LUN.

    As far as VAAI, I think you’re confusing that with VADP, the VMware APIs for Data Protection. But VADP relies on snapshots for its use 🙁

    VAAI is for offloading storage-related tasks to the array (like zeroing out a VMDK). With VAAI, VMware says “Hey SAN, write 0 to the disk 10,000 times, then tell me when you’re done,” whereas without VAAI, ESX has to say “write 0, write 0, write 0…” etc.

    I would say you need to look into Backup Exec. It will be able to utilize the vStorage APIs for all your non-FT VMs, and it can be used the “old school” way to back up your FT VMs; plus it has data dedupe and is pretty badass.

  7. We’ve been doing this exact setup since the original HP MSA 2000 G1. I couldn’t agree more with you. It’s the ultimate SMB infrastructure… and once in place it’s quite flexible and extremely reliable. I’ve now deployed every single make/model of the MSA/P2000s and all have been rock solid. We typically go with the MSA2000sa (SAS) models, as it really keeps the infrastructure investment low, lowers the cost and complexity of implementation, and minimizes the space requirements.

    Our clients are all centralized in extremely high rent areas and every square foot dedicated to a server room costs a significant amount.

    My approach has been the MSA2000sa, dual SAS HBAs, three R710s loaded with memory, and a pair of low-cost SATA drives in a RAID 1. Purchase the VMware Essentials Plus bundle (which now includes vMotion like you mentioned); depending on the client/needs etc., I might get a little R410 and run vCenter/backup on that… otherwise I’ll virtualize the vCenter/backup server.

    Done about 20 of these in the last 3 years and I’ve had 0 unplanned downtime. Even during migrations from ESX 3.5 to 4, and then 4.1… thanks to vMotion. 🙂

  8. Hi Justin. Great review.

    I am currently doing a P2000 G3/P4000 VSA comparison and pretty much came to the same conclusion (i.e. VSA not really worthwhile).

    However, you state that you install mostly P4000 SANs and I am curious to know if you have compared the P4000 to the EVA 4400. The EVA 4400 Starter Kits provide Fibre Channel connectivity for not much more than the P4000 and provide much better scalability.

    HP are really pushing the P4000 (LeftHand) gear at the moment, but I’m struggling to find a place for it, unless you want low-cost replication. Even then I’d probably favour a VM-level replication product.

    Your thoughts?

  9. Thanks for the comment Domenic,

    The VAR that I work for was a LeftHand Networks partner back in the early days. Our sales guys got familiar with the LeftHand products, and many of our customers were after lower-end SANs to pair with XenServer or standalone Windows servers and be able to replicate their data to another site (either in the same building or campus).

    This made the P4000 (or NSM at the time) a great choice. As for the EVA4400 compared to the P4000: most of our customers are the type who do not have an IT team… they have us… or they have just one guy. Fibre Channel introduces yet another thing that they have to understand and invest in. Most of the time this is more than they want to invest (time and money). So there again the P4000 was the winner.

    Keep us updated on what your choice is, and if you want to write a comparison of the EVA4400 versus the P4000, let me know; I would be more than happy to let you do a guest post or something. I just don’t have enough experience with the EVA line (only have one customer with one that I actively work on) to write a review on them.

  10. You’ve actually motivated me to start my own blog. See http://domenicalvaro.blogspot.com/2010/12/hp-storageworks-p4000-g2-san-where-does.html.

    Seriously though, if you haven’t looked at the EVA you really should. Fibre Channel switch configuration isn’t too difficult to pick up for small setups and it really is a great product, especially when you understand how its vRaid is superior to traditional RAID.

    For first-time deployments the EVA 4400 Starter Kit includes FC switches, HBAs and storage for an excellent price, when you consider what you’re getting. You’ll never install a P4000 again!

  11. The price comparison against the LFN VSA isn’t exactly fair. I think it’s somewhat apples to oranges. The VSA is a cost for those who already own spare storage HW and/or virtual servers. It seems more appropriate to quote against one of the HP/LFN HW solutions instead of a VSA, possibly a starter pack.

    We began using LFN HW nodes several years ago, then did a lot of P2V, leaving spare servers which later became repurposed for VSAs that do replication with the LFN HW ‘appliances’.

    I don’t have any experience with the P2000. Does it do replication? A very nice feature worth highlighting for those with an entry level SAN and later want to grow or add DR functionality.

  12. Thanks for the comment, Travis. I do understand what you’re saying, and agree that you wouldn’t need to include drives in the pricing provided you have hardware already. My review was directed toward new installs, as most SMBs that I work with do not have any servers with enough drive space to repurpose as a SAN… at least not for storing several virtual machines.

    The P2000 will do replication; however, it is async, and depending on how you’re doing replication with the LeftHand, it may or may not be similar. As far as scalability, the P2000 will scale way beyond what a true SMB will need… if there is a possibility of needing more room or speed than it can provide, we would initially recommend stepping up to something like an EVA or Clariion, where you could buy the head end and then drives as needed.

    In an SMB environment it’s hard to go back a year later and ask for $15k to add another 3.6TB SAS node to a LeftHand cluster… Mr. CFO usually has issues with stuff like that 🙂

  13. Our company purchased 12 of the MSA P2000 G2 arrays, and have had repeated G2 controller failures that result in total LUN corruption. We even had the HP Storage Tiger team onsite trying to determine what was happening with these arrays when we had a complete failure of both G2 controllers on a single array that resulted in the destruction of all data on the array. HP replaced the controllers in 6 of the arrays with G3 controllers, and this appears to have fixed those arrays, but we continue to have failures on the remaining G2s. We have actually removed the remaining G2 disk shelves from our environment and are trying to work with HP to replace them with G3s, but they are not too receptive to the idea. All these arrays started failing within months of being installed, and we have applied all fixes provided by HP.

    All I can say is, if you’re looking at this type of storage, you may want to look at the IBM V7000. It’s about the same price point, faster, larger capacity, and has storage virtualization, Easy Tier, and thin provisioning built in. And no, I don’t work for IBM; I am just frustrated with a weak HP product and continued bad HP support.


  14. Hi,

    I’m looking for information on the P2000 G3 box.
    Great article – but maybe you know this bit also: who’s the actual manufacturer of the P2000 G3 box/controllers? Is it actually HP, or is it just HP branded?

  15. Hi Justin, just to clarify, I think the term is called “off-host backup”. From what I gather, the P2000 has a hardware VSS provider that leverages the SAN snapshot technology to allow the backup host to mount a snap of all the LUNs on the P2000 and then back them up. In this case this would not be for VMware but for 2 SQL hosts on the P2000. So any experience with the add-on hardware provider available as a download from HP is appreciated.

  16. Hey Justin,

    Thanks for the info on the P2000 G3. I’m looking for info regarding VMware SRM 5 support with the FC version.

    I note remote snap is now available as a licensed add-on but cannot find any compatibility information.

    I have searched HP SPOCK but given the information on the site cannot see SRM 5 listed against the P2000 G3.

    Do you have any info regarding the setup?

    Does anyone know (preferably piloted) if this is a supported configuration?

  17. Hey Noel,

    While I do not believe there is an SRA plugin for the P2000, SRM 5 now has what is called VR, or vSphere Replication. This allows you to replicate virtual machines at the hypervisor level while being completely storage agnostic. You could even replicate just your super-critical VMs from a cluster with a P2000 to a single massive server on the other side (or maybe a couple of clustered servers utilizing local storage). Bottom line is that SRM 5 allows for a lot more flexibility than previous versions. Shoot me an email at [email protected] and we can talk more.

  18. I have a question that is not real clear in the literature. I know you can’t mix two different protocol controllers, other than the dual-protocol controllers themselves like the FC/iSCSI controller. But what about 10Gb iSCSI and 1Gb iSCSI?

  19. Nothing but trouble….

    We have a few MSA2000s and we have had nothing but trouble with them. Controller failures, driver failures, and no redundancy. Support is absolutely horrible; every time we have trouble we have to send in the logs, wait a day or two for them to analyse them, then we run manual checks, which takes two days, then send in the logs, and this goes on for weeks. All for them to say… uh, you have another dead drive. Would never buy them again.

    We also have a P4500 dual-node SAN, and we are using almost all of the 10 VSA licenses that came with it, and I am 100% impressed. The software updates are so much easier, the support is amazing; I had one problem/question and within 45 minutes it was solved and I was on to the next thing. The console for management is one of the best I have seen. I would never go back to an MSA device even if it was 5 times cheaper; it’s just not worth all of the hassle and downtime.

  20. I will say that the P4500 is a good box, and the P2000 seems to have mixed reviews. We have sold about 25 of them over the last year and have had no problems with them. Are you running P2000 G3 controllers or older ones?

    One thing I will say that I do not like about the LeftHand software is Network RAID 5… I would NEVER recommend using that. It utilizes snapshots to calculate parity, and if you’re doing local or remote snapshots on top of Network RAID 5, I have seen systems go into a state where they will take days to get all synced back up.

  21. We are using the G2 models. On one SAN I think we have had to replace about 3/4 of the drives already, the complete motherboard, and the controller; they even sent us a replacement steel chassis. Even if we got the lemons of the bunch, that would be fine, but the support is so slow, and in our case the vdisk(s) usually have some sort of corruption, so it’s down for days or weeks because we have to run all of these tests first, and then we have to rebuild or restore everything. This is never a critical thing for us, but it just wastes so much time.

    I have not used remote snapshots on our VSA RAID 5 setups. We use that SAN mostly for VMware VDR backups and low-end servers, so I don’t think I will ever do snapshots on them, as they are already filled with snapshots of servers… no point in my case. But thanks for the heads up on that.

  22. Hi Justin,
    I walked your same path from P4000 to P2000 and I agree with your analysis. We have a P2000 in each remote branch, with around 50 users (and from 3 to 10 VMs) in each of them. They are easy to set up, use, and upgrade; we are now planning to get one 10Gb iSCSI model in the HQ too, to replace our old EVA4000.
    One question regarding remote snaps and backups: we plan to bring data from remote branches to the HQ using the Remote Snap feature of the P2000, and make centralized backups here. Our infrastructure is almost totally virtual, so we will probably implement a tool like Veeam for backups. Do you have any experience with how remote volumes with remote VMs can be efficiently backed up with these tools, making use of the Changed Block Tracking tools in VMware? I think that this will involve a local mount of the remote volume, but I’m just guessing here…

  23. Let’s talk offline about this; we can post the results, but let’s draw it out and let me show you the way I would do it first. Shoot me an email at [email protected] with the number of sites and the types of hardware at each site (just a general overview… 2 VM servers, 1 SAN, 10 VMs, 50GB total data) and what types of links you have between sites and HQ. I will put together a Visio of exactly how I would do it then.

  24. I have 80 concurrent users running a JDE application and plan to use HP P2000 storage, comparing it with the Dell PS6100. I think they are different classes; how can I know what’s suitable for us, and what’s the trend for SAS disks: 300 GB, 600 GB, or 146 GB, in 2.5″ or 3.5″?

  25. My company has recently set up a network, but the system integrator does not seem to have designed it well for us, causing slow backups. I hope someone here will be able to assist me.

    1. 1x MSA P2000 G3 for our SAN storage – used as the ESXi datastore for VMDK files
    2. 2x HP DL360 G7 for ESXi 5.1
    3. 2x HP StoreEver MSL 2024 tape libraries with 1x LTO5 drive in each library
    4. 1x DL320 G8 for Windows 2012 with Symantec Backup Exec 2012 V-Ray Edition
    5. VMware vSphere 5.1 Standard, VMware vCenter 5.1 Update 1 Standard (on a virtual machine)

    The SAN is connected via its 1Gb iSCSI ports over CAT6 cable to a switch, and both ESXi hosts access the SAN via the 1Gb network.
    The SAN has 8 iSCSI NIC ports, which are in use for different VLANs. The backup uplink is only 1Gbps to the backup server.

    The backup server is connected to the 2x tape libraries via 6Gb SAS cables.

    Obviously the problem is that backing up data over a 1Gb port is slow.
    Q1. Can I attach a 6Gb SAS cable between the SAN and the Windows backup server so it can back up directly via SAS?
    Q2. I heard that attaching a SAS cable from the VMFS (SAN) to Windows (NTFS) will corrupt the VMFS file system once the volume is mounted. Is this true, and is there any way to prevent it?

  26. If the P2000 is the iSCSI model, then the only way to present storage to servers is via iSCSI. The SAS ports are for the back-end buses only (i.e. expansion shelves).

    There are really too many variables to list in this scenario…
    So to answer your questions:
    1.) No.
    2.) If you were to hook the SAN via SAS to a server, you would probably crash the SAN and the server, as they are not meant to work that way… but in general, if you were to connect a VMFS volume to a Windows box, it would corrupt it IF you mount it. If you turn off automount in Windows, this will allow the backup program to back up directly from the SAN over iSCSI to the tape drive… provided you configure Backup Exec properly.

    My advice would be to find a new reseller if they are not familiar with the technology.

  27. We have 1 MSA P2000 FC for main storage (5 hosts & 60 VMs). We bought another P2000 FC, and with Veeam B&R 7, how can I replicate between these 2 P2000s (in storage mode, not VM)?

  28. You can either purchase the P2000 replication license for each SAN, or set up a vCenter server in front of each physical SAN and then use Veeam to replicate between the two at a VM level.
