I received a P2000 G3 FC/iSCSI combo array from HP in mid-December so I could do some reviews as a sort of follow-up to the G2 model that I blogged about over the summer. I racked it up in my lab rack and sent out the management IP and credentials to the rest of the team so everyone who wanted to could poke around. After all, the best way to sell a product is to get an engineer on-site talking to the customer during another project, telling them why they need one. I've also found that if the engineers haven't played with a product they will almost NEVER recommend it to a customer. Anyhow, I asked them for some feedback and what they liked about it too, since they have also worked with P4000's and Celerra's and such. So I'm hoping that this post will have several parts… my review and then the reviews from the other guys as well.
Overall I must say that I’m in love with this thing. As a matter of fact I can only say two non-positive things about the P2000:
- Having to clear metadata before a drive will rebuild is a pain (definitely not a show stopper, because normally you would be putting in a replacement drive and this wouldn't be necessary, but if you're like us and popping drives out just to see what happens, it's an extra step)
- I didn't get to keep it! LOL, I guess I will just have to get by with my MSA1000 for a few more years until the baby is grown and P2000's are on eBay for a reasonable amount that fits my lab budget 🙂
I could probably list close to 100 things I like about the product, so I won't go into too much detail, but some include:
- Super easy setup; it literally took me 30 minutes from unboxing until I was provisioned.
- 8Gb Fiber Channel really makes this unit shine. (iSCSI is a great way to start out with shared storage, but after I hooked up Fiber from my ML370 G5 to this puppy you could really watch it flex that 2GB of cache and the SAS drives… and I only have access to 4Gb FC cards… 8Gb would be even faster with more drives)
- Expandability: you can migrate RAID levels, add drives, add drive groups, etc., all while it's servicing I/O.
(the G2 model that I blogged about over the summer just got 4 more SAS drives installed in it for an Exchange datastore, and we were able to do it during the day without any downtime)
- Value. OK, this thing isn't under a grand… but it is one hell of a buy. You're not going to find one in too many labs just yet, but I bet we will see a lot more of these getting installed out in the field as people realize just how great of a deal they really are.
- Flexibility, in that I can mix and match drive types and physical sizes. No, it isn't a Clariion with FAST technology… but if you're looking to create some SAS datastores for VMware, plus a bigger SATA datastore for archives of backup data or documents that aren't hit as much, this baby can sport both SAS and SATA, and if you have really big data requirements, those 3.5″ 2TB drives should get the job done.
- YOU ALREADY HAVE DRIVES FOR IT! I'm sure that if you're looking at this SAN, then most likely someone has already talked you into a pilot server with local storage. That's OK, we will migrate those drives out of that server and into the P2000 so you don't waste the money you already spent.
- ACU-like interface. One thing that makes this unit popular with me is that, while not identical, it's very close to the same interface as the HP Array Configuration Utility used to configure Smart Array controllers in HP servers. This makes it super easy to get going if you have configured HP servers before.
- Compact design. The fact that I can fit up to 24 drives (SFF model) in a single 2U of rack space is awesome! With just this single shelf and drives on the market right now, I could be sitting on 24 x 600GB = 14.4TB of raw SAS space. Plus, not only do I have 14.4TB of raw space, I have redundant controllers and up to four 8Gb Fiber Channel interfaces or four 10Gb iSCSI interfaces. Truly amazing. (Also, as a side note, if you're into SATA space then 12 x 2TB will net you out at 24TB raw in the LFF chassis 🙂)
- Speed. OK, it's not a $500k SAN, but it is still pretty quick for SMB applications. At one point during my tests I was installing 3 Windows 2008 R2 servers and 2 Windows 7 VMs, then configuring one as a domain controller and one as an Exchange 2010 server. All of this took place within about 90-120 minutes… Not too bad, I think. It was sustaining around 100MB/s, and I would say that is about as random as a workload can get. (Note: my drive config was a 6-drive RAID 5.)
- VAAI support coming soon. Although it doesn't have it right now, it will have VAAI support shortly, which will allow much more efficient data transfer from VMware to the SAN. This is the most exciting news on the P2000 roadmap in my opinion.
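For anyone checking the shelf-capacity math in the compact-design point above, here is a quick sketch (bay counts and drive sizes are just the examples from this post, not a verified spec sheet):

```python
# Raw (pre-RAID) capacity of a single fully populated P2000 G3 shelf.
# Bay counts and drive sizes below are the examples used in the post.
def raw_tb(bays, drive_tb):
    """Raw capacity of one shelf in TB, before any RAID overhead."""
    return bays * drive_tb

print(round(raw_tb(24, 0.6), 1))  # 14.4 -> SFF chassis, 24 x 600GB SAS
print(round(raw_tb(12, 2.0), 1))  # 24.0 -> LFF chassis, 12 x 2TB SATA
```

Remember this is raw space; usable capacity after RAID and spares will be noticeably lower.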
I was looking up pricing on Fiber Channel HBAs the other day and I was pretty disappointed at the prices on them. For an SMB they are way too expensive. Come on guys, $1,200 for a single-port 8Gb HBA?? This is going to limit the number of customers who choose FC over iSCSI, especially now that 10Gb iSCSI is available for the MSA. If a customer does want FC storage, I foresee some starting with 4Gb HBAs and not jumping all in at the high price of 8Gb.
Some comments from Nick at SMS: For a small SMB it is still quite an investment… For a 50-user network it is a must-have.
I have never configured any HP storage, but I have configured other vendors'. It took me no time whatsoever to figure it (the P2000) out. Really great interface.
It does one thing and does it well: provide storage. No add-ons to bring it to its knees or introduce compatibility issues.
In closing I would like to thank the guys at HP for giving me a chance to put the P2000 G3 through its paces. I don't know of too many manufacturers who would send something worth this much to someone to play with, so hopefully I draw enough attention with these articles to sell a few units for them.
Update: I just received hardware for a customer upgrade: P2000 SAS SAN and a bunch of drives!
Thanks for the really good review. I'm a small hosting provider; we are hosting DCs, Exchange, and terminal servers on VMware 4.1. I'm planning on buying the P2000 with 24 10K SAS disks and 8Gb FC tomorrow. How much load can it take? We have about 25 VMs right now running on a really low-end IBM DS3200 iSCSI SAN (crap)!
You should have no problems if you're going to run 8Gb Fiber Channel. Check out some of the other articles I've put together; they show the statistics I've achieved with 4Gb Fiber Channel and iSCSI, as well as best practices on how to set it up with VMware.
Also, I would recommend that you create at least two vdisks and volumes if you have that many disks, and then set the primary controller for one of the vdisks to A and the other to B. That way you will load-balance the traffic more efficiently across the controllers during normal operations.
Just noticed this new post, was reading your other ones on the P2000 yesterday as your blog posts are featuring pretty high up on Google these days 🙂
We’re planning a virtualisation project at the moment, 16 physical servers going down to 3 and have been considering various SANs including…
– EMC VNXe
– HP P2000 G3
– HP P4300 G2
– Oracle S7000 series
– Dell EqualLogic
– NetApp FAS2020
Some of those look a bit out of our budget or we have issues about the licensing for various add-on features so the P2000 is looking the best fit as it stands. We like the look of the P4300 as well but prefer to be able to mix SATA and SAS on the P2000 to get best value out of the storage.
What we're looking for is something solid to run our virtualised servers on 3 DL380 hosts (not sure if it will be Hyper-V or VMware yet) with enough performance for 400 users in an education environment. Do you think the P2000 fits the bill (iSCSI version), provided we spec enough 15k SAS disks for the VMs, Exchange, SQL etc.?
Just sent you an email. Thanks for the comment!
We are going through a similar project as Gerrard Shaw. We have 12 servers which are to be virtualised using VMware Essentials Plus.
What are your thoughts on the best solution?
Nick, Thanks for the Comment!
I shot you an email to the address you used for the comment. For those who are also looking at these solutions and have questions about if they are the best solution:
I would encourage everyone to work with a VAR. If you can find a VMware/HP VAR even better. All VMware partners have access to a Capacity Planner tool that can show you the workload on your physical servers and determine the number of IOps, amount of RAM, and amount of CPU power you will need to make a successful virtualization project happen.
If you do not know of, or are not working with, a VAR in your area, let me know and I will see if I can track one down. If I cannot find a local VAR, we can see what we can do about getting the Capacity Planner run for you and help design the best solution for you.
Good story, thanks for the info. I’ve bookmarked your blog.
Great review, Justin. It featured highly in my decision to purchase this MSA. I decided to go with the 1GbE iSCSI controllers. I was pleased with the initial testing: performance was good, and the UI was good and simple, all echoing your comments.
However, I thought it would be important to highlight some fairly serious issues around this MSA. During our preproduction testing of the MSA we decided to shut the device down in a controlled manner and change the order of the disks, something HP advised was OK during the SR that followed. I shut down each controller and powered off the unit, jumbled the disks around, and powered back on expecting all the volumes to be available again shortly, or so I thought…
Both controllers had a fault condition reported. To cut a very long story short, HP support ran a few commands in the CLI and told me both controllers had failed and would need to be replaced, and that it was likely one had killed the other! They offered no reason why, other than to say that sometimes one failed controller can knock out the second!
I logged the issue with HP on the 17th March, and have just been told today the replacement controllers will not be available for delivery until the 13th April due to there being no stock available in the warehouse. Read into that what you will.
Otherwise Great little Storage platform that I doubt will make it into production for our organisation!
One thing to remember is that the MSA is still a form of Smart Array controller. If I recall correctly, it is not advised to move more than one disk drive at a time. HP support said it was fine to move all the drives around?
HP explained it was not recommended but said the array would cope with it. I was testing a lift-and-shift scenario where the drives may not have been reinserted in the correct order. I expected at worst an unhappy array with disks in a Lost/Unknown state, not two bricked iSCSI controllers and a month's wait for new ones. My concern is not the technology (I admit I was being bad to it, but nothing that wasn't feasible in the real world!); it is the lack of any useful warranty support and the limited troubleshooting from HP. I can't even go out and buy a controller; HP have none in stock anywhere according to my supplier.
I see. I just checked Ingram Micro and they have the controller listed as a special-order product.
Email me at [email protected] with your name and HP case number and I will see if I can help.
Have you noticed the HP Fibre Channel Enclosure Svc Dev with LUN0 in vCenter Client Storage Adapters?
I wrote a blog post about that at http://deinoscloud.wordpress.com/2011/04/05/hp-msa-channel-enclosure-svc-dev-lun0-and-mask-path/
I don't see that behavior on all MSA2324 devices… Looks like the firmware revision plays a role here…
Did you experience that as well?
Well, I do not have a Fiber Channel P2000 G3 right now, but I do have access to a SAS model and it is indeed there as well. I wouldn't doubt that the Fiber Channel models have it in most cases.
Your blog is great… glad I found it! I just recently purchased the G3 SFF SAS model for two DL580 G7 ESX hosts. The setup was easy and this is an awesome little unit.
I was wondering what kind of read speed you were seeing? From a Linux VM, I'm seeing 160-190MBps read results using the "hdparm -tT /dev/sdxx" command. The results are similar whether /dev/sdxx is a vmdk or an RDM disk. (RAID 5 array with six 300GB 10K SAS disks.)
Running the same command from the ESX console I see about a 15% increase in read speed (500MB vs 582MB in 3 seconds), which is expected because I am bypassing the VMware hypervisor altogether when running hdparm from the console (I think).
Do you know how these numbers compare to other units, or if 180MBps a decent speed? Thank you.
The SAS model is awesome! Unless you do multipathing you would never see 180MB/s from a SAN unless you had 10Gbps Ethernet or FC… so for the price you can't go wrong.
Your numbers sound about right, the only way that I was seeing anything faster was when I had 6 drives in a RAID10… then I was seeing about 300MB/s on sequential reads.
Because the SAS model is running at the same speed as the drive’s backplane you only have two possible bottlenecks… the controllers (CPU normally) or just a lack of spindles. And since the P2000 G3 is rated for up to 149 SFF drives, if you only have the one shelf my guess would be spindle count… so if you ever need to go faster you will want to go to RAID10 or start adding spindles 🙂
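To put that spindle-count argument in rough numbers, here is a quick sketch. The per-drive IOPS figures are common industry rules of thumb, not HP specs, and real results will vary with cache and workload:

```python
# Ballpark random-read IOPS per spindle (rules of thumb, not vendor specs):
# 7.2k SATA ~75, 10k SAS ~125, 15k SAS ~175.
RULE_OF_THUMB_IOPS = {"sata_7k2": 75, "sas_10k": 125, "sas_15k": 175}

def array_read_iops(drive_type, spindles):
    """Very rough aggregate random-read IOPS; ignores controller cache
    and RAID write penalties."""
    return RULE_OF_THUMB_IOPS[drive_type] * spindles

print(array_read_iops("sas_10k", 6))   # 750  -> a single 6-drive group
print(array_read_iops("sas_10k", 12))  # 1500 -> doubling spindles roughly doubles it
```

Which is exactly why adding spindles (or going RAID10) is usually the answer when a small shelf runs out of steam.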
Thank you for the quick response. RAID 10 is definitely the way to go for sequential I/O (if you have the budget). I thought about more spindles but I hesitate to move beyond 6-8 disks per RAID5 array because from what I recall there is a tipping point where the read/write balance gets unbalanced. That may be old school thinking?
For example, I have another project that requires lots of storage for online internet data backup. I'm thinking about the P2000 iSCSI LFF with 12 x 2TB drives. I'm too nervous to create one big 12-drive array, but I would lose 4TB by creating two RAID6 arrays versus one. Decisions, decisions! I was going to choose RAID6 because the 2TB drives are midline (MDL) drives and the risk of losing a 2nd drive while rebuilding an array is higher.
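For anyone checking that 4TB figure, the trade-off works out like this (a quick sketch, assuming RAID6 always gives up two drives per group to parity):

```python
# Usable capacity of a RAID6 group: two drives' worth of space go to parity.
def raid6_usable_tb(drives, drive_tb):
    return (drives - 2) * drive_tb

one_big   = raid6_usable_tb(12, 2)      # 20 TB -> one 12-drive RAID6 group
two_small = 2 * raid6_usable_tb(6, 2)   # 16 TB -> two 6-drive RAID6 groups
print(one_big - two_small)              # 4 -> the 4TB lost by splitting
```

Splitting costs capacity because each extra group pays the two-drive parity tax again; what you buy with it is a smaller failure domain and faster rebuilds.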
Great write-ups on the P2000. I just followed your setup Rev. 2 on a DL380 G7 (2 x 146GB 15k SAS, RAID 1) and a P2000 G3 (6 x 1TB 7.2k SATA, RAID 5 with 1 hot spare) with two Dell gigabit switches for the iSCSI subnets. I was trying to test performance and have a 20GB VM folder on the RAID 1 datastore. I copied that folder from the RAID 1 datastore to the SAN datastore and back with Veeam Backup and FastSCP, and I was only able to hit 33MB/s both ways. Your review said you hit around 100MB/s. Any thoughts on where the bottleneck might be?
My initial guess would be the SATA drives. But do you see multiple paths in VMware to your LUN? Also double-check to make sure Round Robin is selected and not MRU.
Hello all, just to share an issue we are experiencing with SAS multipath, if someone can help.
We bought a P2000 G3 SAS and we have started the first installation and configuration with a DL 385 G6 with 2 SC08E HBAs, to have a fully redundant dual-path configuration.
The cabling layout is the following:
Server1 HBA1 – Port1E –> ControllerA Port2
Server1 HBA2 – Port1E –> ControllerB Port2
The server is running VMware ESX 4.0 Update 2 with the latest updated driver downloaded from the VMware site.
We also ran the Firmware Maintenance CD version 9.20B on the DL 385 G6 server, to be sure we have the latest firmware.
The P2000 is running the latest Firmware version T200R21.
We have good experience with FC and iSCSI connectivity, while this is our first SAS installation, and I was supposing it should be simpler and straightforward than FC 🙂
We have created a 1TB Volume mapped to the 2 HBAs on both P2000 controllers, with Explicit Mapping.
The problem is that the LUN is correctly visible on one path, but the other path is seen as DEAD.
ESX in /var/log/vmkwarning is reporting something like this:
VMW_SATP_LOCAL: satp_local_claim: VMW_SATP_LOCAL does not support multiple paths per device. Refusing to claim path vmhba3:C0:T0:L1
From the Manage Paths we see a Storage Array Type as VMW_SATP_LOCAL with a default Fixed path selection policy.
The suggested policy seems to be "Vendor_Unique" but I don't know what that means.
We’ve searched around for some documentation but we didn’t find anything useful.
Is there some particular configuration to do in order to achieve multipath ?
Thanks in advance.
Yes, Round Robin is set up and I show two paths as I/O Active; I followed your setup to the "T"… I am going to load two VMs, one on each datastore, and try a file copy between the two. I'll report back.
I loaded a Server 2k3 machine on each datastore and copied a 1GB folder to one of the servers. I shared the folder and used Vice Versa Pro to copy the 1GB folder from a mapped drive (SAN datastore server) to a local folder (local datastore server); it maxed out at 80MB/s but was more consistent at 50MB/s. I think this is acceptable; does it sound about right to you?
Your blog is very helpful to me as most of this stuff is over my head – you don’t speak over my head but you express a lot so I appreciate that.
Have you tried using two P2000 G3’s together in tandem with VMware 4.x, in a redundant manner? HP and VMware state the P2000 G3 is fully supported in 4.0 and 4.1 but my VAR says a pair can’t mirror each other in this way within VMware. Any comments, tips or kicks to the butt would be appreciated.
You can replicate between P2000 G3 models (as long as they have iSCSI interfaces, so the iSCSI/FC combo and the iSCSI models can replicate); I believe it takes a replication license on both boxes. However, this is asynchronous replication… meaning it takes a snapshot and then copies that data to the other SAN. HP does have several other SANs that can do other sorts of mirroring, and if you're not strictly an HP shop, the new VNX and VNXe series do this replication too.
Would it be possible to build a 2 node SQL server cluster and a 2 node Hyper-v cluster using SAS HBAs on the P2000 G3 SAS dual controller model? Assuming each server had 2 HBAs for dual pathing to the P2000.
For the SQL side of your project, check out the HP whitepaper here on just what you're talking about: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-8969ENW.pdf
As for Hyper-V… you may want to check their HCL list. I'm sure it would work, but I don't do much with Hyper-V so I couldn't make a "for sure" statement on it.
Thanks for the blog on this; we are looking to purchase one of these units. In your review you noted that we could "mix-n-match" drives. Are there any stipulations with that, or do the drives operate at their normal speeds?
In the P2000 SAN you are able to group SATA drives together and also put in a group of SAS drives. You can also mix the sizes of drives, but I should note that when you create RAID groups, all drives in that group should be the same size and same speed. So you could put in a group of 300GB SAS drives that spin at 10k RPM for databases and Exchange servers, and then also put in some 500GB or 1TB SATA drives that spin at 7,200 RPM for file storage and archiving.
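A quick sketch of that grouping rule; the `Drive` type and the check below are purely illustrative (my own naming, not any HP API):

```python
# Illustrative model of the rule above: you can mix drives in the chassis,
# but every drive inside one RAID group should match in type, size, and speed.
from collections import namedtuple

Drive = namedtuple("Drive", "type size_gb rpm")

def valid_raid_group(drives):
    """True if all drives in the proposed group are identical in
    type, size, and rotational speed."""
    return len({(d.type, d.size_gb, d.rpm) for d in drives}) == 1

sas_group   = [Drive("SAS", 300, 10000)] * 6
mixed_group = [Drive("SAS", 300, 10000), Drive("SATA", 1000, 7200)]
print(valid_raid_group(sas_group))    # True  -> homogeneous 10k SAS group
print(valid_raid_group(mixed_group))  # False -> SAS and SATA mixed in one group
```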
Mistakenly, we recently bought a P2000 FC SAN with DC power. The following is the configuration:
AP841A HP P2000 DC-power SFF Chassis
AP836A HP P2000 G3 MSA Fibre Channel Controller
Since there is no DC power in the datacenter, please let me know: if we replace the power supplies from DC to AC, will it work?
If yes, what would be the part numbers of these power supplies?
Thanks in advance.
I would call your distributor; they should be able to help. But I believe the HP spare part number for the AC power supply is 592267-001.
Would you recommend using the VAAI plugin for the MSA P2000 G3 FC?
I'm not sure of the features of this plugin, but I can see that you were looking forward to it – and it's here now!
It's not available for ESXi 5.0 – should I give it a test run anyway?
If I bought one of these with 12 x 2TB SATA drives, would it be capable of pushing over 100MB/s to a VMware host? We hope to use 2 x 1Gb connections using iSCSI and MPIO.
Love the blog and have subscribed to RSS.
The 2TB SATA drives will not be the fastest, so it will really depend on whether you are doing large file transfers or lots of small files.
But theoretically, if you set up multipathing correctly, it should have no problem handling that.
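As a rough sketch of the ceiling on that 2 x 1Gb setup (the 90% efficiency figure is a common planning assumption for TCP/iSCSI overhead, not a measurement):

```python
# Theoretical iSCSI throughput ceiling: 1GbE is 125 MB/s raw; assume ~90%
# survives TCP/IP and iSCSI protocol overhead (planning rule of thumb).
def iscsi_ceiling_mb_s(links, efficiency=0.9):
    """Best-case sequential MB/s across `links` 1GbE paths with MPIO."""
    return links * 125 * efficiency

print(iscsi_ceiling_mb_s(1))  # 112.5 -> one link already clears 100MB/s
print(iscsi_ceiling_mb_s(2))  # 225.0 -> with round-robin across both paths
```

So the network won't be the limit at 100MB/s; whether the 2TB SATA spindles can feed it, especially on random I/O, is the real question.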
At the moment I am using 16 slots of my P2000 with RAID 5 and 1 spare disk, and now I want to add 8 more disks to the P2000. So far I have found some documentation saying that RAID 5 doesn't support more than 16 disks and I should use RAID 50; is that right? And do you have any idea how to upgrade the RAID level on the P2000?
If you want to add more drives for capacity you will need to create a new RAID group. If you want to convert to RAID 50 you would need to add 15 more drives…
What are your expectations in adding these next 8 drives?
I have a P2000 with 4 x 1Gb iSCSI (dual controller) and have installed the latest firmware. In reason 3 above you say you can migrate the RAID level, but I cannot find a way to do this. Do you know how this is done?
“Expandability, you can migrate raid levels, add drives, add drive groups, etc etc while its servicing I/O.”
What level of RAID are you at now, and what level are you trying to get to?
Thanks so much for the great posts!
Is there any possibility of getting synchronous replication between two of these arrays? I think the P4500 has it, but the cost is about double.
We are an SMB – around 60 people – so I don't know if the P4500 is cost-efficient.
Thanks a lot!
Now I found you mentioned Open-E DSS V6, so maybe we should handle online replication between two P2000s with this software?
I wouldn't… Open-E is a poor man's SAN solution in my opinion. The P2000 has the ability to replicate to another P2000 built in… you just need a license to enable it.
Or you could use Veeam if the SAN is being used for virtualization.
What problems are you trying to solve with synchronous replication?
I want to use it for IT infrastructure with virtualization. Well, synchronous is "in time"; I mean when something happens you won't lose any data.
Based on your Phase 4 post I thought you used synchronous replication. "http://jpaul.me/wp-content/uploads/2010/05/Virtualization-Phase-41.jpg"
Still, I'm not sure if https://alteeve.com/ (KVM + DRBD) isn't a better replacement for VMware + HP storage in the case of an SMB. (I'm trying to figure that out.)
But are you going to put the SANs in different locations? The reason I ask is because you could do something like the P4000, but if you are just going to have two boxes right on top of each other, then why isn't the built-in redundancy enough? With dual controllers you rarely see a complete SAN failure, no matter who the vendor is.
Also, the only thing I would say about using KVM and DRBD is supportability. While I am an advocate for open-source software when appropriate, I would also say that sometimes you are better off with a commercially available solution. I can think of several clients I have installed VMware and a commercial SAN for who started off using something open source but quickly found that they couldn't support it unless they wanted to spend the extra time doing it.
Think about Zimbra, for example; it's a great product, and in fact I use it myself. BUT for a customer to switch from Exchange to Zimbra they are going to have to be willing to put in some training time right now, because you can find an Exchange guy at every Starbucks… but someone who knows how to support Zimbra is much harder to find, and when you do find one, they can charge almost whatever they want because they know their skill is rare right now. Give it 5 years, and I think you will see a huge shift, with people deploying Zimbra as often as Exchange.
I wanted to put them in different buildings; the reason is that if something happens in the first building (robbery, flood), everything will be OK. Why the P4000? (It looks like the P4000 costs double ;-( )
Yes, you are right; the worst thing is support, plus they can charge a lot more money.
But, you know, if you set up open source correctly, it can work for ages without any change or update 😉
An entry P4000 (two nodes) with about 2.7TB usable is 30k list price… if you work with an HP VAR you should have no problem getting 20% off. The reason a P4000 is double is because you are getting two nodes, which could be separated, and you are getting software that keeps the nodes in sync much like DRBD does… but it has a lot more features and is supported.
I agree, but everything runs great only if it's set up right… it's when it breaks that you have to worry 🙂
Well, 30k. I don't think an SMB with 50 users and about 4 VMs needs storage that expensive. Maybe it would be easier if I could use some facts to make the right decision, or in this case is it all about money?
Anyway, while reading your other posts, it looks like you rated the P2000 as a better "solution" than the P4000.
My results from a P2000 G3 SAS!
At the VM level I got 1500MB/s
using 12 x 600GB SAS drives.