I’ve done several posts about the P2000 series of SANs from HP, but all of them covered the iSCSI or Fibre Channel models. This post catches you up on the SAS model and what it can provide for you. I decided to put it together because, after my SMB Cluster v2.0 post, I got some questions about the SAS model.
(Rear View of a P2000 SAS Model)
SAS is the same technology used inside most new servers these days; think of SAS as the industrial version of SATA, and of the P2000 SAS SAN as the industrial version of eSATA. The technology is used by a lot of major manufacturers to connect additional disk shelves to their controllers, including (but not limited to) EMC, HP, NetApp, Dell, and Compellent. However, not many manufacturers actually connect to the compute side (the ESXi/ESX nodes in this case) with SAS; the P2000 does have that option.
Let’s first talk about why you would pick the SAS P2000 over its sister models. The most obvious reason: 6Gbps SAS connectivity is much faster than 1Gbps iSCSI. Even with iSCSI multipathing we still would not achieve 6Gbps into a single ESXi node… at least not without six gigabit NICs per node. Next is probably the biggest seller for me, and that is direct connectivity, meaning we need no switches between the ESXi nodes and the SAN. This will save us thousands of dollars. And if you grow beyond 4 hosts, you can purchase a SAS switch from LSI and expand the SAN past that limit.
Provisioning the SAN is very easy and simple too, and you have complete control over the disk drives and how they are allocated (not the case with the competing EMC VNXe). So if you want a 2+2 drive RAID10 volume you can do that, or if you want a 6+1 RAID5 you can do that too.
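To make those two layouts concrete, here’s a quick back-of-the-envelope usable-capacity calculation. This is just a sketch; the 300GB drive size is an assumed example, so plug in whatever SFF drives you actually have.

```shell
# Assumed drive size for illustration only; substitute your own drives.
drive_gb=300

# 2+2 RAID10: four drives, mirroring costs half the raw capacity.
raid10_usable=$(( 4 * drive_gb / 2 ))

# 6+1 RAID5: seven drives, one drive's worth of capacity goes to parity.
raid5_usable=$(( (7 - 1) * drive_gb ))

echo "RAID10 2+2 usable: ${raid10_usable} GB"   # -> 600 GB
echo "RAID5  6+1 usable: ${raid5_usable} GB"    # -> 1800 GB
```

The point being: the P2000 lets you carve up the drives however the math works best for you, rather than forcing you into pre-set pools.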
The other thing I really like about the P2000 (be it the SAS or iSCSI or FC) is that you can reuse your 2.5″ SFF drives in this chassis.
The downside of the P2000 SAS SAN (if you want to call it one) is that you can only connect 8 hosts to it (4 if you want redundancy). Sure, you could use a SAS switch to expand that number, but then you are looking at about $3,000 per switch plus another $450 list for the shelf that holds them, so roughly $7,000 total. That is right in line with a pair of Cisco 3750 switches, I suppose.
This SAN will also (most likely) NOT integrate with your current fabric. Most places with an aging SAN are running either iSCSI or Fibre Channel, so with this SAS model you will be starting from the ground up… which may not be a bad thing, depending on where you are today.
The only other downside I see is that the P2000’s competition now offers a unified solution. The VNXe can serve CIFS and NFS without any other front-end device. I don’t know the roadmap, but my guess is that this may be remedied when/if the P2000 line moves to the Store 360 OS. Personally, I don’t have much use for a SAN that can do CIFS shares, but I do see benefits to adding NFS to a box like this.
The Bottom Line
In my opinion, the SAS P2000 is the best way to go if you are going to purchase VMware Essentials or Essentials Plus and don’t foresee yourself growing past 3 hosts in the next 3-4 years. Why? My SMB Clusters version 2.0 post explains the reasoning, but to put it simply: we don’t want to put cash into SAS switches, and we still want to use a direct-from-SAN backup method… so that fills up the 4-port controllers. It’s also a very easy SAN to work with and gives great value per dollar spent, even though it is just block-level storage. This model, like the rest of the P2000s, keeps things simple… it provides block-level storage to a server… that is all.
As always if you have a question let me know.
“The other thing I really like about the P2000 (be it the SAS or iSCSI or FC) is that you can reuse your 2.5″ SFF drives in this chassis.”
Be very careful with this phrase. There is a limited number of drives that are directly compatible. Check the MSA’s spec sheet for details of what drives are compatible.
Actually, the 2.5″ series of ProLiant drives is fully compatible… it’s the 3.5″ drives that can have issues, from what I’ve read.
Can you hot add additional trays?
Also, what’s the best cabling method to allow additional trays to be added in the future without downtime?
Thanks for posting about your experience with the P2000 G3. I’ve got personal experience with its predecessor (the MSA2000 G2), and it seems to me that you fail to mention one obvious point about the SAS model. As I see it, this model is targeted at the c3000 blade enclosures, as the 6Gb SAS Blade switch will allow you to connect all 8 blades in the enclosure, in redundant mode, to up to 4 P2000 shelves. And the two-pack 6Gb SAS Blade switch is only around $4,000.
So are you looking to purchase the P2000 G3 – think blades 😉
Only one issue with that statement that I think is flawed… most people who are looking at the P2000s would never spend the kind of cash you need to get into a blade setup. LOL, in the past, when someone was looking at blades they were probably thinking EVA.
Talking with a guy at HP, he claims that you can only connect one server to it without a SAS switch. I tried to explain that there are 4 SAS ports per card, and he said they are only good for redundancy. I asked him if he knew people who do 8-way redundancy, but he continued to insist it is required because the MSAs do not have built-in layer 3 switching.
Unless my understanding of SAS is wrong, I don’t see why any layer 3 switching would be needed.
He is wrong. We have many setups out there that are direct connected.
However if you grow more then 4 hosts you could purchase a SAS switch from LSI and expand the SAN beyond 4 hosts.
The HP P2000 G3 SAS is not on the HCL for the LSI SAS6160 switches. Does anyone know if this setup will work? Are the LSI SAS switches compatible?
Yes, they work. We have used them at a client before.
Here is likely an odd question to be asking here, but you are the only person who seems to have actually used this setup (and posted about it).
If you connect the SAS MSA to the LSI switch, does the switch create a bottleneck when it is only connected to one of the 4 SAS ports?
I can’t seem to figure out whether the 4 SAS ports on the controller are 4-lane or single-lane.
I would think you would bottleneck at the controller, because it only has 6Gbps out to the drives anyhow… but the setup where I have the LSI switches does not come close to pushing that kind of storage bandwidth. I would say that if you are worried about getting close to the 6Gbps limit, then upgrade to the Fibre Channel model with Fibre Channel switches.
Hi Justin. Nice Blog!
I have a P2000 G3 SAS controller with 3 enclosures filled with 2TB drives – 72TB.
I RAID 50 them to give me 60TB of storage. No problem.
BUT – in Windows Server 2008 Std I see two 55TB partitions. WTF? I format them GPT, 64k, and they pop up as two 55TB drives. How can this be? Is this an MS problem, firmware, or what? Please advise. I don’t see compression anywhere and don’t feel I can trust this??? Please help.
You will see two because you have multiple paths to the storage. However, Microsoft does not do round robin or any other true multipathing (that I know of) out of the box, so you will see two devices, but one should be offline or similar, because that one is presented by the secondary controller (or whichever controller does not own the LUN).
Justin, can’t you configure MPIO so it only shows once, and then it will do round robin or however you configure the MPIO?
If you have the dual-controller model of the SAN and also the dual-port SAS card, you should have two paths to each LUN: one to the controller that owns the LUN and one to the other controller. With the SAS SANs I did not normally configure Round Robin, as there would be no real benefit to it; 6Gbps SAS directly to the array is line speed of the drives.
Justin, is there any best practice for the dual controllers and SAS interfaces with VMware 5?
I only find iSCSI best practices.
Just want to know if there are any specific settings for VMware.
Just plug it in? LOL, there isn’t much configuration needed. The only thing you will want to do is set the Path Selection to Fixed and make sure it prefers the path that “owns” the LUN.
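If you’d rather script it than click around, this is roughly what that looks like from the ESXi shell. The device and path IDs below are made-up placeholders; pull your real ones from `esxcli storage nmp device list` and `esxcli storage nmp path list`.

```shell
# Placeholder IDs -- substitute your own LUN and the path that lands
# on the owning controller.
DEV=naa.600c0ff000db00000000000000000000
OWNER_PATH=vmhba2:C0:T0:L1

# Claim the LUN with the Fixed path-selection policy...
esxcli storage nmp device set --device $DEV --psp VMW_PSP_FIXED

# ...and set the preferred path to the one on the owning controller.
esxcli storage nmp psp fixed deviceconfig set --device $DEV --path $OWNER_PATH
```

Repeat per LUN; with only one cable per controller there isn’t much else to tune.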
Is this a solution that would work for vMotion between hosts? Or would this not count as shared storage because of the SAS aspect? I appreciate it.
All of the P2000 models will support vMotion. The only thing that matters is that both hosts see the same datastore. As long as that happens you are good to go 🙂
I like the blogs on the site. I’ve installed a few P2000s over the years, all iSCSI. I’m doing a SAS implementation with 3 vSphere hosts, each with dual HBAs, and was interested in the cabling configuration.
Having checked out the installation and cabling guide from HP, there’s a diagram for 2 hosts and 4 hosts but none for 3 hosts.
For the G2 I used the guide in the link below, but I can’t find anything similar for the G3. Is there a recommended way to do this for the G3?
HP StorageWorks MSA2000 G2 cable configuration guide
It should be the same configuration as the two-host setup… there are 4 SAS ports on the back of each controller, so just go to the next port on each controller and cable it to host 3.
Hi Justin, I’ve been tasked with connecting a new P2000 (dual controller, 6x2TB drives in RAID5) to an existing MSA60 (single controller, 10x1TB drives in RAID5). The MSA60 is being used for backup-to-disk (Backup Exec) and has over 8TB of data which I’d prefer not to lose. I can’t find much on the web, but I hope you can help. Thanks in advance.
Well, the MSA60 is a shelf with no real controller in it; it must be hooked to a Smart Array controller in a server or to an MSA. That being said, the MSA should recognize the array group from the Smart Array controller and not destroy any data… at least that is what it does when you pull drives from a server and plug them into an MSA. Just make sure the MSA is shut down when you plug in the shelf of drives. Please note that while I have done this with drives moved from a server into the MSA/P2000, I have not attached a whole shelf with drives already in it… you may want to call HP support to confirm.
We will purchase a P2000 SAS model with dual controllers and are going to make one big 9TB RAID10 volume on it. We use 3 servers with VMware 5, with two SAS cables per server, one to each controller.
How should I configure path selection in VMware? Round Robin to spread the load between the two controllers on the P2000, or Fixed?
You can leave the PSP set to Fixed or MRU, as it really won’t matter on the SAS model of the P2000… mainly because most people only have one 2-port SAS HBA per server. Because you have one cable going to each controller, you only have one path to the controller that owns the LUN, and because of that only one path will show up as “Active (I/O)” while the other will just be “Active”.
If you were to put two SAS HBAs in each server and run two cables to each controller, then you would see the SAN paths more like a Fibre Channel or iSCSI environment with switches… you would see 4 paths per LUN… two “Active (I/O)” paths and two “Active” paths.
On a side note, I would not make a single 9TB RAID 10 volume… I mean, if you need the performance of that many spindles, cool… but having all your data in one failure domain seems bad… I would probably break it up into two RAID 10 groups.
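If you want to verify which path is which from the command line, the NMP path listing shows the per-path state (the device ID below is a placeholder; the labels differ slightly from what the vSphere client displays):

```shell
# List every path for one LUN. With one cable per controller you should see
# two paths: one through the owning controller and one through its partner.
esxcli storage nmp path list --device naa.600c0ff000db00000000000000000000
```

Whichever path shows as the active/working one is going through the controller that owns the LUN.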
Do you have a preferred SAS HBA on the server side (assuming they’re on the vSphere HCL)? I’m speccing some Gen8 DL385s and just wondering which to go for or, more importantly, which to avoid.
Your best bet is to look at the spec page for the P2000. It will list which HBAs are compatible with the SAN; it used to be the SC08e card… I haven’t worked with the SAS P2000 in several months, so I’m out of the loop.
Thanks for the great Blog. We have a P2000 SAS SAN and an LSI 6160 SAS switch. I’m hoping to draw on your experience with the client you mentioned who has a similar configuration. In this deployment scenario we are using a single LSI 6160 switch and a dual controller P2000 SAN. We were planning on cabling one SAS port from each controller to the switch and cabling each server into the switch. I know that the SAS switch is the single point of failure here, but that meets the client’s requirements for availability at the moment (they may get another SAS switch for redundancy in the future). Based on the client that you set this up for is this an acceptable cabling configuration? Do you have a better cabling solution? This SAN will be used for storing VMware and Hyper-V virtual machines (this client uses both hypervisors).
Just make sure to set up the zoning on the SAS switch, otherwise it will get confused. You will need to zone the SAS switch so that it logically looks like two switches, then create an A side and a B side, and cable accordingly after that.
We have some servers running in a virtual environment currently using iSCSI. We are looking to purchase new servers and storage to create a better virtual environment. Our goal is to move our database to a virtual server. We are looking at purchasing a couple HP servers and a P2000 storage unit. We are looking at either SAS or Fibre Channel. We were told that you cannot do vMotion as you do not have true shared storage with SAS. From what you’ve said, that doesn’t sound correct.
My questions are:
1) Can we accomplish vMotion and High Availability with SAS?
2) Which method SAS or FC would you recommend?
3) Do you see any issues or have any suggestions for running a large MS SQL server in this environment?
Just curious about your thoughts on the above post?
Thanks for your feedback on zoning the SAS switch. I’ve been through 3 SAS switches from LSI now (the 4th replacement is in the mail) and I’m losing confidence in their product. My latest interaction with their tech support has their tech trying to get me to believe that the SAS switch is not a “hot-plug” device. I’ve plugged new hosts into the SAS switch with it powered on, and it flashes the lights, spins up the fans, reboots, spins up the fans, and bricks itself, never to be configurable again. The tech support fellow at LSI says that before plugging any new devices into the switch, it must be powered down. I’m willing to try that theory on my next replacement, but I’m wondering if this mirrors your experience with this switch. If it’s true, it makes the switch quite a bit less useful, as I’d have to shut down the SAN and all hosts every time I wanted to add a new host. Doesn’t that go against the very reason we build SANs in the first place?
Hi, great post, thanks. I know this post is 2 years old, but right now we have a P2000 dual-controller SAS MSA… connected to an HP DL580.
My question is: can I boot the server from the P2000? Most of what I found is about booting from the P2000 with FC or iSCSI… nothing about a SAS connection. If so, how can I achieve this? What settings should I set in the BIOS/SAS HBA BIOS?
The DL580 has an SC08 SAS HBA… so from each controller in the P2000 there is a cable going to the server, into the same SAS HBA card.
The OS will be Red Hat.
I have already set up several iSCSI HP P2000s with Hyper-V in a Cluster Shared Volumes environment, but I don’t know how to proceed with a SAS HP P2000 MSA.
Will I have to create LUNs just as with iSCSI, and then connect each node of the cluster with the iSCSI initiator?
Great post. I find it strange that manufacturers of these arrays don’t advertise the fact that SAS connections also work fine with vSphere clustering. I have many installs set up with the Dell MD3200 SAS unit (yet to try the P2000 G3) and they work great and provide better performance than iSCSI. I wonder if anyone has a link to a document showing official support for this unit using SAS connections with, e.g., a three-node vSphere cluster.
Is it possible to reduce the SAN size while it is hosting VMs?
Not sure what you mean. Can you elaborate on what you are trying to reduce?
We have a Hyper-V two-node cluster using an HP P2000 SAN for VMs (RAID5). There is only one volume configured for VMs and no more space or volumes on the SAN. To use parallel backup for CSV we are supposed to have a snapshot volume as well.
I am not sure if it is possible to modify/shrink the HP P2000 volume. Would it cause issues for the cluster? The edit button on the HP SAN is greyed out.
I need your help to improve the multipathing at my job. I have 3 ESXi hosts (2x DL380 G7 and 1x DL580 G7, each with 4 NICs) and an HP P2000 G3 SAS array, all in a VMware 5.1 environment.
I installed the HP VAAI plug-in and configured iSCSI multipathing like your blog shows, but each datastore always shows only 1 “Active (I/O)” path of the two visible paths per LUN; I’m unable to activate both at the same time.
Every ESXi host has 2 SAS ports: on each host one is connected to port A1 and the other to port B1 on the P2000. 2×3 = 6 ports, leaving 2 free ports (1 of them connected to another server for backups).
I don’t know what the problem is in activating the Round Robin PSP so both paths are active. Can you help me? Thanks.
With a SAS SAN you aren’t doing iSCSI multipathing… with SAS you will only have 1 active I/O path and one active or standby path… both paths will not be Active (I/O) unless you have two cables going to the owning SAS controller.
OK, I had tried all the possibilities; now I know why it doesn’t work, thanks.
Which PSP do you recommend in that case: Fixed, MRU, or Round Robin? And one last thing: I changed every datastore from 1000 IOPS (the VMware default) to the 1 IOPS recommended by HP to activate multipathing. If I don’t have multipathing, what is the correct IOPS setting?
Thanks a lot!!!!
It’s not really going to matter, because the paths won’t be switching. But MRU is the option you will want to go with, as I believe Fixed path was removed in 5.5 for ALUA arrays.
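For reference, putting a LUN back to MRU and undoing the 1-IOPS change looks something like this from the ESXi shell (the device ID is a placeholder; check yours with `esxcli storage nmp device list`):

```shell
# Placeholder device ID -- substitute your own LUN.
DEV=naa.600c0ff000db00000000000000000000

# Claim the LUN with the Most Recently Used policy.
esxcli storage nmp device set --device $DEV --psp VMW_PSP_MRU

# If the round-robin IOPS knob was changed, restore the 1000 default.
# (This setting only matters while the device is claimed by VMW_PSP_RR.)
esxcli storage nmp psp roundrobin deviceconfig set --device $DEV --type=iops --iops=1000
```

Again, with one cable per controller the IOPS value is moot, since there is only one Active (I/O) path to switch between.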
I need help configuring an HP P2000 storage array to connect directly to Windows Server 2008 R2 through a QLogic 2650 FC HBA.
I’m new to storage and need to connect the storage to this server.
Please, guys, your support will be highly appreciated.
I am from Brazil, and this is my first experience with a storage/clustering setup.
I would like to ask you something to make sure I have connected the cables properly.
I have 2 DL360 G7 servers, each with an LSI SAS2 2116 (4 ports each),
and a StorageWorks P2000 G3 with 4 SAS ports per controller, exactly like your picture at the top of the page.
How should I wire them?
Here is what I have done so far:
– SERVER 1 PORT 1 to CONTROLLER A PORT 1
– SERVER 1 PORT 2 to CONTROLLER B PORT 1
– SERVER 2 PORT 1 to CONTROLLER A PORT 2
– SERVER 2 PORT 2 to CONTROLLER B PORT 2
The thing is, the servers can see the volumes; however, Disk Management shows 4 disks, and so far I can only make the cluster work if I disable the last 2 detected SAS disks.
As this is my first experience, before fighting the OS configuration I would like to make sure I have connected the cables correctly.
Would you be able to help me?
The reason you are seeing four disks is that you have multiple paths to the array: two disks are seen through path 1 and the other two through path 2. You need to install MPIO in Windows and claim the disks as multipath disks.
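On Server 2008 R2, once the Multipath I/O feature is enabled in Server Manager, the rough sequence from an elevated prompt is the following (a sketch; double-check mpclaim’s switches against Microsoft’s MPIO documentation for your build):

```shell
# Have MPIO claim all attached storage devices; "" matches every device ID.
# -r reboots the server automatically, which this change requires.
mpclaim -r -i -a ""

# After the reboot, confirm each LUN now shows up once, as a single
# MPIO disk with two paths behind it.
mpclaim -s -d
```

Once MPIO owns the disks, Disk Management should show two disks instead of four, and the cluster validation should pass without disabling anything.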