Running vSphere 4 on HP P2000

Related Articles include: HP P2000 (MSA2000i) Review and HP P2000/MSA G3: A First look for SMB’s

Because there is no published document on how to set up and run VMware on an HP P2000 (at least none that I could find), I have put one together after reviewing the P2000 FC/iSCSI combo unit that I was lent by HP (thanks again, guys!). I have created a PDF version of this for easy offline viewing, as well as prettier formatting. I tried to model this document after the “Running VMware vSphere 4 on HP LeftHand P4000 SAN solutions” document that HP produced after acquiring LeftHand. Some parts of that document that are identical for the P2000 I simply copied into this one; other parts have been created specifically to cover the P2000’s differences from the P4000.


Click here to download the PDF. Updated to revision 2, which was a simple graphics change plus a change log added to the end of the document.

I encourage everyone to leave a comment or suggestion. Ideally, if the guide is to be accurate it will require input from more than just one author.

Update 01-10-2011

If you are working with the SAS model, check out this post. He has created a great setup guide for the SAS model, which can also be applied to the other models too… just don’t forget to configure your port settings if you’re using anything except SAS.




75 Responses to "Running vSphere 4 on HP P2000"

  1. Your post made my day. Thanks.

    I am considering buying an entry-level SAN. It will be used for non-critical VMs and backup through VDR.
    I am considering the P2000, AX4 and DS3400/DS3500.
    Please let me know if you have any experience with the AX4 or DS3400.
    Also, can you share your experience with the P2000 in regards to reliability, performance (in IOPS) and the HP SAN management software?
    Q: are the disks hot-swappable?

  2. I am actually putting together another post that highlights all of the advantages of the P2000, but since it’s not ready…

    Yes the P2000 supports hot swap drives.

    One thing you will give up with an AX4 is that you will need a vault pack… this is 4-5 drives depending on the model of the Clariion. And a single Clariion holds 12 (or 15, can’t remember) drives… so you just gave up a lot of capacity on 4-5 of those slots.

    With the P2000 the firmware lives on the controllers… not on the drives, therefore you could take ALL the drives out and it would still start up, or if you lost your RAID array you’re still good to go. With a Clariion, if you were for some reason to lose the vault array, you would be reloading the SAN.

    As far as IOPS… with iSCSI (multipathed 1Gbps) I was able to get several thousand IOPS out of 6 SAS drives with the CrystalDiskMark tests. With Fibre Channel (4Gb… only HBA I had around) I was able to get some extra IOPS out of her… but either way it’s pretty damn fast for 6 drives in a RAID 5.

    As for the DS series SANs, I can’t speak on those as I have no experience with them.

    Personally I love both the EMC line and the HP P2000/P4000 line, but if you’re doing lower-priority VMs and backup data, I think you will find that the P2000 will more than fit the bill… plus I’m sure that HP SATA drives for your backup data will be cheaper than the EMC SATA drives.

  3. What a great blog 😉

    Thanks for all the precious information you give here.

    We are actually planning to rebuild our IT infrastructure on a VMware-based solution.

    I am aiming at the HP P2000, but lots of questions come to mind.

    Which technology should we use? FC and 10GbE are too expensive for us; personally I find SAS a better solution compared to iSCSI:

    – meets our needs (at first, only 2 ESXi hosts accessing the storage, and the P2000 has 8 SAS ports)
    – no need for redundant switches
    – “plug and go” (iSCSI multipath configuration seems to be tricky)
    – better performance compared to iSCSI.

    Did I miss something, or what could be wrong in this comparison?

    Another point: let’s talk about backups and snapshots of VMs (we plan to use Veeam). I saw that the HP D2600/D2700 disk enclosures (which are less expensive) can be plugged into the P2000; can they be used to store backups and snapshots produced by Veeam?
    Can these backups be externalized on another iSCSI device, or placed on another vSphere storage pool located on a remote site? (The link is 100Mbit/s.)

  4. Thank you for your comment first off. As for your questions I agree, SAS is a much more cost effective way to go if you have a limited number of hosts.

    However, the 8Gb FC model of the SFF MSA is only $1000 more (list price) than the SAS model. Where you will save the most money is on HBAs, as the SAS HBA for the MSA is $199 list price, whereas the 8Gb Fibre Channel HBAs are about $1200 each.

    I think that the only thing you will give up by going the SAS route is the ability to integrate it into an existing FC network (which from the sounds of it you don’t have), and also the cabling is much smaller for fiber than all of the large SAS cables (although not a big deal). Other than that, I think SAS would be a great option and give better performance than iSCSI in most situations. One thing I will note is that you will need to put two HBAs in each server and run two SAS cables to the MSA… one to the top controller and one to the bottom controller, in order to survive a controller reboot or failure.

    As for backups, I recommend getting SATA drives as they are more cost effective. Veeam will store its backups anywhere it can talk to, be it an iSCSI target, DAS, or a Linux server. If you were to add a shelf to the MSA, I would recommend an MSA60… which holds 3.5″ drives. Then you can take advantage of 1 or 2TB SATA drives for higher capacity, which will lead to longer retention at a lower cost.

    What you would most likely need to do is present the backup LUN to the Veeam server via SAS or whatever medium you chose. After you do that, just format it NTFS and point your backup jobs there.

    As far as sending data across to a remote site, you have two options. One is to create a replication job to replicate it to a VMware host that is at that remote site; the other is to have a Linux server there that you send jobs to. Personally I create a replication job and point it at a standalone ESX server with local storage for an offsite backup.

  5. We’re thinking of purchasing a P2000 for our backup disk target. We use CommVault backup software.
    All backups get funneled through the MediaAgent (MA) Servers and then will get written to the P2000.
    We are planning on having 2 MA’s. Many of the clients will backup over the LAN, then the Media Servers will push it to the P2000. Some will come over FC to the MA’s to the P2000. Then, all backups will get written to tape.

  6. Thanks for the reply to my initial post.
    Can you please provide the specs of the P2000 G3 storage controller, especially the CPU details?
    Congratulations on passing the VCAP.

  7. Hello Justin,

    You have done a great job posting information about the P2000; in fact I followed it to install my P2000 and two DL360 G7 servers with VMware Essentials Plus.

    I have some questions about snapshots. Every time one is performed by the schedule, the name of the snapshot changes, so any mounts and LUN assignments are lost.

    What I’m trying to do is mount a snapshot or a volume copy of 3 raw volumes assigned to a VM on a physical server where I have an MSL2024 for tape backup. But as I said before, the mount is lost every time the scheduled snapshot is executed. Do you know a way to script a job to perform the snapshot, and then mount the snapshot to the physical server again every day?

    Or is there any way to configure an automatic mount of a scheduled snapshot?

    any input will be appreciated!!


  8. Hi, thanks for the great article. Can someone confirm that the 6Gb SAS version of the P2000 actually supports vSphere HA and DRS?

  9. I know we spoke via email, but I thought I would reply in case someone else has the same question:

    Q: Is the SAS SAN fully supported on VMware and support all VMware HA and DRS features?

    A: Yes, the SAS SAN is fully supported and also supports all VMware features.

  10. Hi, thanks for great posts and a great tutorial. I have a question regarding the MPIO setup for a P2000 G3 iSCSI box using two NICs per controller. I’ve been following the tutorial, and also the Open-E DSS tutorial for manually lowering the Round Robin IOPS limit, but have been unsuccessful in getting good transfer speeds. Using esxtop I can see that both vmnics are being used, but only at around 400Mb/s each, maxing out at around 80-90MB/s when testing with both hdparm and HD Tune. Both the vSwitch and the vmks have been set to MTU 8950 (9000 had connectivity problems).

    Any clues? The NICS in the servers are HP NC382T (2 x dedicated Dual-port Broadcom NetXtreme II).

  11. How is your switch configured? What type of switch is it? Is its MTU set to something like 9150 (or whatever its max is)? If your SAN is set to 9000 and the vmks are only 8950, you will see fragmentation of the iSCSI packets in some cases.
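    If it helps to rule the MTU out, here is roughly how jumbo frames can be checked and set from the ESX/ESXi 4.x CLI. This is only a sketch; vSwitch1, the IP addresses, and the iSCSI1 port group name are placeholders for your own values:

```shell
# List vSwitches and their current MTU (ESX/ESXi 4.x)
esxcfg-vswitch -l

# Set the vSwitch MTU to 9000 (vSwitch1 is a placeholder name)
esxcfg-vswitch -m 9000 vSwitch1

# On 4.x a vmknic's MTU is set at creation time, so list the
# existing ones and recreate any that are wrong at MTU 9000
esxcfg-vmknic -l
esxcfg-vmknic -d iSCSI1
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 -m 9000 iSCSI1
```

    Remember every hop (vmknic, vSwitch, physical switch, and SAN port) has to agree, or you get the fragmentation I mentioned.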

  12. Hi again, the switch is configured with MTU 9000 for the P2000 iSCSI ports and 8950 for the ESXi. I can try lowering them for the P2000, but I can’t manually set a value, just enable jumbo frames.

    I tried with only one port and that gave me around 800-850Mb/s, so I doubt the MTU is the real issue?

  13. Tried everything today with no luck. Going to try another switch (the ones we have today are Netgear GSM7248v2s) and see if I can get the MTU to 9000.

  14. Justin, does anyone have a similar guide that details setting up the HP StorageWorks P2000 G3 MSA with Fibre Channel instead of iSCSI in vSphere/vCenter 4? I have one, and I have all my VMware stuff set up using local drives on the servers instead of shared storage on the MSA, but I don’t know how to get vCenter to talk to the fiber cards in the servers, or get the cards to talk to the MSA. Any links or tips would be helpful. I’ve been using vCenter for over a year so I am familiar with it; I just don’t have any storage/SAN/fiber experience.

  15. Thanks for this great guide, which helped me a lot in setting up my lab! I have a problem though: I can’t get MPIO working. Both paths for my volumes are set, but only one shows “Active (I/O)”; the other is just “Active”. If you have any ideas…

  16. I’m using the iSCSI SAN (no FC) with vSphere’s software initiator. Path selection is set to RR. I’m wondering if it is because I’m using a software initiator (single IQN) to 2 static iSCSI targets (the A1 and B1 SAN ports).

  17. Did you set up two vmk interfaces with separate IP addresses and bind them to separate vmnics? If you don’t do this part just right, multipathing will not work properly. But using the software initiator should still do multipathing with a single IQN… it’s all about the path setup, not the IQN.
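    For reference, the binding step looks roughly like this from the CLI on ESX/ESXi 4.x; vmhba33, vmk1, and vmk2 are placeholders for your own software iSCSI adapter and vmk ports:

```shell
# Bind each iSCSI vmkernel port to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify that both vmknics are now bound to the adapter
esxcli swiscsi nic list -d vmhba33
```

    After that, a rescan of the adapter should show two paths per target.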

  18. Hi !

    Thanks for your reply. The setup is really close to the one described in the PDF. The only thing I didn’t set up is the CHAP authentication. Apart from this, I have 2 SAN subnets, 2 vmnics for iSCSI, and 2 vmks. I’m currently trying to reproduce this in another environment to figure out where the hell the problem is… 😉

  19. Hi,

    I am going to be implementing a P2000 iSCSI SAN solution using HP ProLiant DL380 G7s as the ESXi hosts.

    Having read through your article, I have a few questions and comments…

    Firstly, HP has a best practices document for deploying vSphere 4.1 on the HP P2000 G3 MSA Array combo controller. This can be found at:
    This document, together with your recommendations, however, does not cover the exact controller that I have in the P2000, and also leaves me with some questions (I have the controller with four iSCSI ports).

    I hope that you can answer these questions so that I feel more comfortable with the reasoning behind my implementation…

    1. What are the benefits of configuring the iSCSI vSwitch and ports as described in your document as opposed to configuring an ether channel on the iSCSI switches and configuring the vSwitch to use the IP hash load balancing algorithm for the iSCSI network?

    2. I’m not sure I entirely understand the point of creating a single vSwitch with multiple uplinks for iSCSI and then setting each port group to only use a single uplink with the rest disabled… surely it would be easier and safer to create multiple vSwitches (one for each iSCSI network) with a single uplink in each. (Incidentally, your implementation echoes how the best practices document suggests doing it; I would just like to understand the reasoning behind it.)

    3. According to the HP Support Document found here:

    The recommended SATP / PSP settings for ESX 4.1 differ depending on the exact array / controller that is being implemented. In my case, it is recommended that I use VMW_PSP_MRU (Most Recently Used) as opposed to Round Robin. It may be worthwhile updating your document to reflect the recommended settings listed on that website.

    4. What are the benefits of creating two iSCSI subnets as opposed to having a single iSCSI subnet? And further to this, in the case of the new controllers that have four iSCSI ports on them, would you recommend having four subnets (which in turn would require more NICs in the hosts, given the recommended vSwitch configuration)?

    Thanks in advance

  20. Thanks for reading Martin, You raise some good questions in your comment, I will do my best to answer them.

    1.) The reason I like to configure the ports without etherchannel is that with etherchannel you need to do switch configuration too. While that is not normally a problem, I like having the ability to swap in a “dumb” switch that has no configuration on it, just in case a switch fails. When you’re in a moment of panic and trying to get your infrastructure back up and running, you could easily forget to reconfigure all of your etherchannel groups. Plus then you don’t have to change any of the defaults in VMware with the IP hash load balancing… the moral of this point is: keep it simple; the more complicated you make it, the harder it is to fix.

    2.) You can actually do it either way, but again I like to keep it simple, and simple to me is just one vSwitch for iSCSI. In either case you will still need to map your physical ports to a vmk device that you create. Because you are using DL380 G7s with the integrated Broadcom NICs, you will want to do this anyhow, because you can utilize the Broadcom hardware initiator and offload all of the iSCSI work to the network card. Once you map the vmk port to the vmnic that it needs to go with, it will guarantee that the port’s traffic only goes out that port. Overall it can be done either way… if you like it with two vSwitches, then do it that way.

    3.) HP can be wrong 😉 I would recommend you set it up and test it. Round Robin has worked on all of the P2000 iSCSI boxes I have set up, and if you tweak the IOPS limit variable in ESX you will see a lot better performance with Round Robin than you will with MRU. MRU will basically make each target pick a path and stick to it until it goes offline; with Round Robin you can utilize all of the links.

    4.) When I first wrote the article I ALWAYS used two subnets. Lately I have had to set up some of them with a single subnet. It mainly depends on how your physical infrastructure will look. I will assume that you will have dedicated infrastructure for storage, so let’s look at your options:
    a.) If you have switches that “stack” with an inter-switch link and share a MAC table… like the Cisco 3750s do, then you could use a single subnet, because all ports would effectively be plugged into the same switch, so no matter where they want to go they would be able to get there.
    b.) If you have two switches like a pair of Cisco 2960s, then you would not want to link them together; each switch would be standalone. Because of that you would want one subnet for each physical switch (you could do two subnets per physical switch, but one per switch is the minimum). The reason is that you need to have one port on your ESX server go to one of the switches and one go to the other switch. Then on the SAN you would want at least one of its ports cabled to each of the two switches… but since you have 4 ports, just plug two into each of the switches and give each of them an IP address.

    I will put together an article on setting up the hardware iSCSI on the HP servers… it used to be a real pain in the ass to do, but with vSphere 5 it has become very easy.
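    For point 3, the Round Robin tweak I mentioned looks roughly like this on ESX/ESXi 4.x; the naa. device ID below is a placeholder, so list your own devices first:

```shell
# Find the device IDs for the P2000 LUNs
esxcli nmp device list

# Set the path selection policy to Round Robin for a given LUN
esxcli nmp device setpolicy --device naa.600c0ff000000000 --psp VMW_PSP_RR

# Lower the IOPS limit so I/O alternates paths every command
# instead of every 1000 commands (the default)
esxcli nmp roundrobin setconfig --device naa.600c0ff000000000 --type "iops" --iops 1
```

    Test with your own workload before and after; the right IOPS value can vary by environment.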

  21. Hi again Justin.

    Thank you for the quick response to my post.

    The isolated iSCSI network that I am installing consists of two Cisco 2960S switches which are stacked using the Cisco FlexStack modules so with this in mind I have made the following decisions:

    I am going to configure an etherchannel on the Cisco stack and have a single vSwitch configured with IP hash load balancing on each host. We have an automated system that backs up our switch configs every night, so this will provide me with an up-to-date config in the event of a critical disaster. In the event that a single switch fails, the config is still stored on the surviving switch and will automatically be copied to the new ‘dumb’ switch once installed.

    Whilst MRU will enable me to use all paths providing I have enough LUNs, I am keen to explore the Round Robin approach and will be going down this route in my implementation. You mention tweaking the IOPS limit variable in ESX; could you elaborate on this?

    As I will be using two switches in a stack, I will opt to create a single iSCSI subnet to keep things simple.
    I will be keen to see your article on the Hardware iSCSI on HP Servers and also any article you may write in relation to the 4-port iSCSI controllers.

    Thanks again 🙂

  22. Thanks Justin – great article.

    We have a P2000 with 2 x 4 iSCSI ports. Are we able to utilise all 8 ports for iSCSI traffic? Do I need to create 4 subnets on the P2000 host interfaces and then 4 VMkernel ports? If I add 4 ports onto the 2 VMkernel ports, when I add them to the network configuration of the iSCSI adapter I get a non-compliant error.



  23. Well, I don’t think that you would need 4 VMkernel ports… unless you wanted to use 4 physical NICs for iSCSI in each of your ESXi servers. I asked HP if they could get me one of the 4-port models to test out and see which way works best, but I haven’t heard back from them yet.

    For your VMK non-compliant problem… you can only map one vmk port to a physical NIC, so as soon as you tried to add the second one to the physical NIC, it became non-compliant.

    What does your iSCSI switching fabric look like? (Number of switches, and what kind are they?)

  24. Yes, that’s what we were looking at doing to increase throughput. Do you not think this will be required? Each host is only running a handful of VMs with fairly low IOPS… We have 2 HP 24-port gigabit switches, 8 iSCSI host ports from the SAN connected, and 5 ESX hosts with 6 or 8 NIC ports on each.

  25. It would most likely not be required. I shot you an email too. Without tweaking some of the Round Robin settings, and without a lot of IOPS coming from the box, I don’t think you would see much gain from using 4 on the host compared to 2. But let me know if you want to test it out; I’ll gladly lend a hand if we can find a time.

  26. Thanks. I may test it anyway with 4 vmks and 4 physical connections and see if there is any increase in throughput… but that’s for another day.

    many thanks


  27. Hi Justin, thanks for putting the PDF together. It has been helpful. In Figure 2, could you advise why you only used one NIC for the VM network? It seems like one of the four NICs is going unused. Couldn’t you team the two NICs for the VM network together for better performance?

    Also, do you have any updated documents for vSphere 5? I am configuring 5 on this HP SAN. I think your version 4 document should be a great start though.

    Many Thanks!


  28. Just got word today from the HP MSA team that they will be sending me a unit so that I can update my documentation as well as fill in some holes. I am also swamped right now with projects, so I will try my best to get through it and update everything. It is all pretty much the same except for how you configure iSCSI on ESXi 5, as it is all GUI based now and doing it from the CLI is optional.
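    For anyone who still wants the CLI on ESXi 5, the port binding that used to go through esxcli swiscsi now looks roughly like this; vmhba33, vmk1/vmk2, and the target IP are placeholders for your own setup:

```shell
# Bind the iSCSI vmkernel ports to the software adapter (ESXi 5)
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
esxcli iscsi networkportal list --adapter vmhba33

# Add a send-targets (dynamic discovery) address for the SAN
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 10.0.0.50
```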

  29. Great, thanks for the quick response, Justin. When you have time to respond, I am still interested to know why your design only used one NIC for the VM network instead of teaming them together for better performance.




  31. I was probably focusing on the iSCSI side and was using a lab box to get screenshots, and didn’t set it up because it would have taken more time. I would always recommend at least 2 physical NICs on a vSwitch.

  32. Your guide is great and HP should consider paying you for this service. It’s nice to hear they are working with you to help customers use their products effectively.

    We are patiently waiting (two-plus months now because of the hard drive shortage) for our HP P2000 iSCSI dual-controller (4 ports each) SAN with an extra shelf, and our 3 DL380 G7 hosts with 12 1Gbit ports on 3 NICs. We plan on using 2 HP 2510-58G switches and running vSphere 5 ESXi, Veeam, and mostly Windows 2008 R2 Datacenter VMs.

  33. HP direct.

    After reading your guide and the HP best practices (page 31) recommendations for connecting the SAN to the switches, here is what we will set up.

    Switch 1 Primary

    Switch 2 Secondary

    • Controller A port 0: to a port on Switch 1 Primary

    • Controller A port 1: to a port on Switch 1 Primary

    • Controller A port 2: to a port on Switch 2 Secondary

    • Controller A port 3: to a port on Switch 2 Secondary

    • Controller B port 0: to a port on Switch 1 Primary

    • Controller B port 1: to a port on Switch 1 Primary

    • Controller B port 2: to a port on Switch 2 Secondary

    • Controller B port 3: to a port on Switch 2 Secondary

    This configuration would give each switch four 1Gbit paths to the SAN: 2Gbit to Controller A and 2Gbit to Controller B.

    The question is how to set up the VMware host side.

    Here is what we think we need to do:

    VMk1 to vmnic4 on Pnic2 to Switch 1 Primary

    VMk2 to vmnic5 on Pnic2 to Switch 1 Primary

    VMk3 to vmnic8 on Pnic3 to Switch 2 Secondary

    VMk4 to vmnic9 on Pnic3 to Switch 2 Secondary

    (the same mapping repeated on each of our three hosts)

    Then we will enable MPIO for VMk1-4 on all hosts, and jumbo frames on the vmnics, switches, and SAN.

  34. Hey Guys,

    I have ESXi 5.0 connected to the HP P2000. The SAN has 8 iSCSI ports and my ESX box has 4 NICs. I entered one of the iSCSI IPs into the “Dynamic Discovery” tab in the iSCSI software adapter. When I did that, all 8 iSCSI ports from the SAN showed up in the “Static Discovery” tab. However, when I go to create a new datastore, there are no disks/LUNs available to choose.

    I have a volume already created on the SAN. I have set up CHAP on both the ESX and SAN sides. I have tried with CHAP and mutual CHAP. I am not having any luck. Can anyone think of what I am missing, or some diagnostics I can run to find out?

    Many Thanks!


  35. Hello Again,

    This is now working. I did two things, and I am not sure yet which resolved the issue. In the CHAP setup on the ESX side, I chose to “use initiator name” instead of specifying a name myself. Also, I completely powered off the SAN, shutting down the storage, and then pulled the power cords for 2 minutes. HP support suggested that. It surprises me that a machine of that caliber can’t fully reset itself remotely. Anyone have any input on the rebooting of this SAN?

  36. I have not had to reboot one to get storage to show up. I will say that I add in my hosts before creating vdisks and such, though. I then normally allow the LUN to be accessed from any port and do not use CHAP, only because I always separate iSCSI into its own VLAN, which is only accessed by the servers that need access to it.

    Stay tuned, I have a 4-port iSCSI model on loan from HP right now and will be putting together a best practices guide with vSphere 5 in a week or three lol (big project coming up, might not get to it this week).

  37. Justin, thanks so much for your work here. Question for you. Just received our P2000 G3 FC dual-controller array today. The old P2000 G3 admin guide for ESX 4.1 says that Round Robin queues I/O to the LUNs on all ports of the OWNING CONTROLLER in a round-robin fashion.

    My plan was to create a single vdisk with a single volume for vSphere 5, with the new higher limits for VMDK size, datastore size, etc… HOWEVER, it sounds like I would be limiting myself performance-wise with only a single vdisk, because only the OWNING controller would see I/O. Would it be best to split this into 2 vdisks, with each vdisk assigned to a different controller?

    Can’t wait to see your vSphere 5 / P2000 guide. I could really use it!!!

  38. Thanks for reading, David. vSphere 5 isn’t much different, just easier to set up the iSCSI.

    How much storage is in your P2000? If it is more than 2TB, I would probably create multiple vdisks.

    ALUA will allow I/O to go to ports that do not own the LUN, but the controllers will pass that I/O over to the owner to keep things consistent.

    Unless you have a LOT of disks, though, you will probably not see much of a difference, because the P2000 controllers are pretty awesome and are made to handle up to 149 disks; so if you only have one shelf… it should be fine.
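    If you want to see the ownership for yourself, ESXi 5 will show which paths are active versus merely available per LUN; the naa. ID below is a placeholder for one of your own devices:

```shell
# Show each device's SATP, path selection policy, and working paths
esxcli storage nmp device list

# Show every path for one LUN -- with ALUA, the owning controller's
# ports should report the optimized group state
esxcli storage nmp path list --device naa.600c0ff000000000
```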

    Thanks so much for your quick response! I just racked our new P2000 G3 FC today. I am getting ready to provision it. I have 12 x 600GB disks and I was planning on doing one large RAID 10 array for performance. We will probably add a 2nd enclosure at some point with another 12 disks in RAID 6 later this year. The idea is to use one LUN for SQL/Exchange on RAID 10, and another LUN for non-IO-intensive applications on RAID 6.

    So, since my single LUN is only about 3.5 TB, would I be OK, or should I break this in half so that there are 2 LUNs, each assigned to a storage controller? Or should I create the 1st LUN now and create the 2nd LUN later with the 2nd enclosure?

    Just trying to understand if there is any benefit to breaking these LUNs up smaller now with vSphere 5; I am especially confused about controller ownership of a LUN.

    Also, do you know if there are any issues with running a mix of HBAs? We have older 6Gb FC HBAs in some of our ESX servers, and are now using 8Gb storage controllers and adding a mix of 8Gb HBAs in.

  40. Great document, Justin. I have an HP P2000 G3 1Gb iSCSI and my config is almost identical to how you configured this example. Everything is running perfectly smoothly with 1 host running about 6 VMs on it. I have a second host (identical server to the 1st host: an HP DL360 with no internal storage, booting off an internal USB and attaching to the iSCSI). I want to use the 2nd host for HA and possibly even load balancing, but I cannot get the 2nd host to see the datastore. In the management interface of the P2000 I can see the host and have a mapping for it; it says it was ‘discovered’. Within the storage adapter properties it has the same paths as the first server, but no connected targets. No matter what I do, I can’t seem to get the 2nd ESX host to share the iSCSI storage.

  41. Paul, when I added the software iSCSI adapter to my host I had to restart the host (and then do a rescan) to see the storage. Just a thought.
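    If a full reboot isn’t convenient, a rescan from the CLI sometimes does the trick too. A sketch, with vmhba33 as a placeholder adapter name (the first two commands are the 4.x style, the last is the ESXi 5 equivalent):

```shell
# Rescan one adapter for new LUNs (ESX/ESXi 4.x)
esxcfg-rescan vmhba33

# Refresh VMFS volumes so newly visible datastores mount
vmkfstools -V

# ESXi 5 equivalent: rescan every adapter
esxcli storage core adapter rescan --all
```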



  42. I have a couple of hosts here at the house and a P2000 just like you have… I’ll see if I can mock that up. I really need to get this thing back to HP 🙁 but with moving I have had NO time to get the testing done that I needed to do.

  43. Hi,
    we have 3 ESXi 5 servers, and each server has 2 hardware iSCSI adapters.

    Do I have to add the targets via dynamic or static discovery?
    And do we have to add the IPs from A1 / A2 / B1 / B2,
    or just 1 port, for best speed?
