Because there is no document published on how to set up and run VMware on an HP P2000 (at least none that I could find), I have put one together after reviewing the P2000 FC/iSCSI combo unit that I was lent by HP (thanks again, guys!). I have created a PDF version of this for easy offline viewing, as well as prettier formatting. I tried to model this document after the “Running VMware vSphere 4 on HP LeftHand P4000 SAN solutions” document that HP produced after acquiring LeftHand. Therefore, the parts of that document that are identical for the P2000 I simply copied into this one. Other parts have been created specifically for the P2000’s differences from the P4000.
Click here to download the PDF. Updated to revision 2, which was a simple graphics change plus a change log added to the end of the document.
I encourage everyone to leave a comment or suggestion. Ideally, if the guide is to be accurate it will require input from more than just one author.
If you are working with the SAS model, check out this post over at virtualizetips.com. He has created a great setup guide for the SAS model, which can be applied to the other models too… just don’t forget to configure your port settings if you’re using anything except SAS.
Did you take a look at the PDF guide here? http://jpaul.me/wp-content/uploads/2010/12/P2000-with-vSphere.pdf
It is for vSphere 4, but vSphere 5 is basically the same thing; there is just less involved in setting up the iSCSI initiators.
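For reference, a minimal sketch of bringing up the software iSCSI initiator against the P2000 from the ESXi 5.x command line. The adapter name `vmhba33` and the portal address `192.168.0.10` are placeholders, not values from this guide; check `esxcli iscsi adapter list` for the actual vmhba name on your host.

```shell
# Enable the software iSCSI initiator (ESXi 5.x)
esxcli iscsi software set --enabled=true

# Point dynamic (SendTargets) discovery at one of the P2000's iSCSI host ports
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.0.10:3260

# Rescan the adapter so the mapped LUNs show up
esxcli storage core adapter rescan -A vmhba33
```

The same steps can be done in the vSphere Client under Storage Adapters; the CLI form is just handy for scripting multiple hosts.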
If you need more help let me know, [email protected] is my email address
Is there any possibility to create a virtual storage cluster, like a combination of an HP P2000 with the internal disks of an external server? This vStorage would be accessed by a VM cluster (vSphere 5).
Node A of the VM cluster would access the HP P2000 storage, and node B would have the internal disks plus the second part of the VM cluster.
To create a datastore cluster (not sure if that’s what you’re asking, though), all datastores have to be accessible by all servers. So since local storage would not be accessible to all servers, you could not include it.
Why not just make the P2000 accessible to both servers?
Hello thanks for reply.
Yes, you are correct, but I found some software solutions for that: http://www.datacore.com/ and http://www.stormagic.com. These might create a datastore cluster from external plus internal storage. The only question is performance; I can’t find anything related to it ;-(
OK, you could connect an HP P2000 directly into 2 servers, but the problem is if you don’t want to have a SPOF and you want to use 2x SANs.
If you have a dual-controller P2000 then it is not a single point of failure, since it has two controllers. This is the standard architecture for almost ALL SAN manufacturers out there.
Sorry, I expressed it the wrong way: in case of a geo catastrophe, or if someone steals it 🙂 then it might be a SPOF.
Do you have any experience with http://www.datacore.com/ and http://www.stormagic.com software in production?
I know what they do, and that’s about it. Personally, when I reviewed software SAN solutions I found that by the time you bought all the drives and the software, you were better off just going with a P2000 or VNXe.
If the problem was only price, I think it’s different now. For example, if you take a look at StorMagic, it costs $2,000 for 2TB of SVA.
An HP P2000 “box” alone costs $6,000, so 2x is $12,000 vs. a $2,000 license.
The only question is reliability and stability; I’m worried about that and can’t find any technical answer. ;/
Why do you need 2 P2000s? The P2000 has dual controllers in one box…
Well, physical damage or a stolen box 😉
I’m setting up a P2000 which will service two ESXi servers (8 NICs each) using 2 V1910 switches, iSCSI only:
Do you know if there are benefits to using the iSCSI offload features of our Broadcom 5709 NICs versus the VMware software initiator?
HP SAN support is telling me that it is unsupported, and that there is no extra-bandwidth benefit to having more than two NICs dedicated to iSCSI traffic on each ESXi host. What are your thoughts on this?
The Broadcom 5709 has some limitations, mostly that it will not support jumbo frames. Also, with CPUs being as fast and having as many cores as they do these days, the CPU savings are minimal… I haven’t done the testing, but depending on your workload you might actually see better performance by using the SW initiator and then using jumbo frames too… it’s hard to say without taking the time to test.
Also, VMware only supports one or the other for iSCSI. While it will work if you use the Broadcom offload cards AND turn on SW iSCSI, you may find that they won’t support you if you call in.
Personally, I would just use the SW initiator, since you never know what you might want to do in the future.
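If you do go with the SW initiator and want jumbo frames, a minimal sketch of the host-side setup on ESXi 5.x follows. The names `vSwitch1` and `vmk1` are example placeholders; note that the physical switches and the P2000 iSCSI ports must also be set for a 9000-byte MTU, or fragmentation will hurt rather than help.

```shell
# Raise the MTU on the iSCSI vSwitch (example name vSwitch1)
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Raise the MTU on the iSCSI VMkernel port (example name vmk1)
esxcli network ip interface set -i vmk1 -m 9000
```

A quick way to verify end-to-end is `vmkping -d -s 8972 <array-ip>`, which sends a don’t-fragment ping just under the 9000-byte MTU.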
Thank you for the quick reply.
Just to clarify my last post, I see that I was unclear. HP mentioned that having more than 2 NICs in an ESXi host with the P2000 is unsupported per their cabling setup documentation. They seem to support both SW iSCSI and iSCSI offload initiators. I was wondering if you know of others who use more than 2 NICs for dedicated iSCSI traffic on their ESX hosts? Do you feel this config is supported with the P2000?
On the offload option, it sounds like you are saying the main reason to go with the sw initiator is flexibility? Especially since the performance hit to the cpu is minimal.
We’ve found that in order to get the hardware offload to work correctly, we need to create a separate vSwitch for each NIC. Do you know what is recommended for the SW initiator? A single vSwitch with two physical NICs, or two separate vSwitches, one per NIC?
I’ve done the SW initiator both ways (1 or 2 vSwitches); it’s normally up to the customer.
I don’t see why more than 2 NICs would not work, but honestly your performance gains will be minimal. If you need more than what two paths will give you, your best bet will probably be to look at the SAS model or the FC model.
And yes, the reason for the SW initiator is flexibility.
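Either layout comes down to iSCSI port binding: each VMkernel port gets bound to the SW initiator so each NIC becomes an independent path. A rough sketch on ESXi 5.x, assuming example names `vmhba33` for the SW iSCSI adapter and `vmk1`/`vmk2` for the two iSCSI VMkernel ports (with the one-vSwitch layout, each port group must also be set to use only one active uplink):

```shell
# Bind each iSCSI VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# Confirm both ports are bound (each shows up as a path source)
esxcli iscsi networkportal list -A vmhba33
```

After a rescan, each LUN should show one path per bound VMkernel port per target portal, which is what multipathing then load-balances across.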
I’ve been reading through your document on connecting the P2000 to vSphere 4.
Since I’m using vSphere 5, is there anything in your document I would do differently for this version when setting up the iSCSI connections?
Thanks for posting this seriously great series of articles on the P2000 🙂
In your document the multipath configuration is based on two collision domains (“there should NOT be a link between the two switches”).
Since I’m intending to use an HP P2000 on the same storage network as an EqualLogic, this causes me a problem. However the HP documentation does show that a single collision domain can be used (see HP P2000 G3 iSCSI MSA System User Guide pg.40, “Four Server/SAN Fabric, 1Gb iSCSI”) and also later it shows dual controllers on a single switch.
It’s a shame and perhaps notable that there is nothing from HP on configuration for VMware, though they claim it’s certified! Did you try a single network?
A single network will work… you just have to be mindful of what is plugged in where in a multi-switch configuration.
Justin, many thanks. Eventually I found an official guide from HP too: http://tinyurl.com/6qzcvbm (“Configuration best practices for deploying VMware vSphere 4.1 on the HP P2000 G3 MSA Array combo controller”).
Thank you very much for the Doc. Just what I needed.
Hi, great doc! Do you have plans for an ESXi 5/5.5 version?
I’m deploying a P2000 G3 10GbE iSCSI soon on ESXi 5/5.5, with an HP 5406 as the core network.
Hey Steve! Thanks for reading. My only concern would be that you have at least one other switch in the mix. If for any reason you ever need to reboot your HP core switch, you could find that your cluster is useless, as it will lose all storage connectivity.
As for a 5.5 guide, probably not, as I don’t have access to HP SANs anymore. But what I will tell you is that the general concepts have not changed.
Hi Justin, that’s true; if it did reboot, it would take down the whole lot.
What if I used an HP 2510 Gigabit switch for iSCSI B? It could be completely standalone, just for iSCSI B.
My only worry would be going from the 10GbE on the SAN to the 1Gb on the switch.
You could do something like that. The problem with 10-gig iSCSI is that there isn’t really a cost-effective switch out there. To get more than a couple of 10-gig ports on a Cisco switch, you almost need to go with a Nexus 3K… I don’t know the HP equivalent.
That is why I say, in most cases, go Fibre Channel or SAS.
Fibre Channel switches would run you about $3,000 for Brocade DS-300Bs… can’t remember if that’s each or a pair… but even if it’s each, you will be at about half the cost of two 10-gig switches.
One thing you might consider is getting a lower-end HP or Cisco switch, connecting the SAN at 10 gig, and then connecting the servers with 4x 1Gb…
Pingback: HP P2000 (MSA2000i) Review | Justin's IT Blog