I received a P2000 G3 FC/iSCSI combo array from HP in mid-December so I could do some reviews as a sort of follow-up to the G2 model that I blogged about over the summer. I racked it up in my lab rack and sent out the management IP and credentials to the rest of the team so everyone who wanted to could poke around. After all, the best way to sell a product is to get an engineer on-site talking to the customer during another project, telling them why they need one. I’ve also found that if the engineers haven’t played with a product they will almost NEVER recommend it to a customer. Anyhow, I asked them for some feedback and what they liked about it too, as they have also worked with P4000s and Celerras and such. So I’m hoping that this post will have several parts… my review and then the reviews from the other guys as well.
Overall I must say that I’m in love with this thing. As a matter of fact I can only say two non-positive things about the P2000:
- Having to clear metadata before a drive will rebuild is a pain (definitely not a show stopper, because normally you would be putting in a replacement drive and this wouldn’t be necessary, but if you’re like us and popping drives out just to see what happens, it’s an extra step)
- I didn’t get to keep it! LOL, I guess I will just have to get by with my MSA1000 for a few more years until the baby is grown and P2000s are on eBay for a reasonable amount that fits my lab budget 🙂
I could probably list close to 100 things I like about the product, so I won’t go into too much detail, but some include:
- Super easy setup, literally took me 30 minutes from un-boxing until I was provisioned.
- 8Gb Fibre Channel really makes this unit shine. (iSCSI is a great way to start out with shared storage, but after I hooked up Fibre Channel from my ML370 G5 to this puppy you could really watch it flex that 2GB of cache and the SAS drives… and I only have access to 4Gb FC cards; 8Gb would be even faster with more drives.)
- Expandability. You can migrate RAID levels, add drives, add drive groups, etc., all while it’s servicing I/O.
(The G2 model that I blogged about over the summer just got 4 more SAS drives installed in it for an Exchange datastore, and we were able to do it during the day without any downtime.)
- Value. OK, this thing isn’t under a grand… but it is one hell of a buy. You’re not going to find one in too many labs just yet, but I bet we will see a lot more of these getting installed out in the field as people realize just how great of a deal they really are.
- Flexibility, in that I can mix and match drive types and physical sizes. No, it isn’t a CLARiiON with FAST technology… but if you’re looking to create some SAS datastores for VMware, and also a bigger SATA datastore for archives of backup data or documents that aren’t hit as much, this baby can sport both SAS and SATA, and if you have really big data requirements, those 3.5″ 2TB drives should get the job done.
- YOU ALREADY HAVE DRIVES FOR IT! I’m sure that if you’re looking at this SAN, then most likely someone has already talked you into a pilot server with local storage. That’s OK; we will migrate those drives out of that server and into the P2000 so you don’t waste the money you already spent.
- ACU-like interface. One thing that makes this unit popular with me is that, while not identical, its interface is very close to the HP Array Configuration Utility used to configure Smart Array controllers in HP servers. This makes it super easy to get going if you have configured HP servers before.
- Compact design. The fact that I can fit up to 24 drives (SFF model) in a single 2U of rack space is awesome! With drives on the market right now, just this single shelf could be sitting on 24 x 600GB = 14.4TB of raw SAS space. Plus, along with that 14.4TB of raw space I have redundant controllers and up to four 8Gb Fibre Channel interfaces or four 10Gb iSCSI ports. Truly amazing. (Also, as a side note, if you’re into SATA space then 12 x 2TB will net you out at 24TB raw in the LFF chassis 🙂)
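The raw-capacity math above is simple enough to sketch in a few lines; the drive counts and sizes are the ones mentioned in the post, and the helper name is just for illustration:

```python
def raw_tb(drives: int, tb_per_drive: float) -> float:
    """Raw (pre-RAID) capacity in TB for a shelf of identical drives."""
    return drives * tb_per_drive

# 24 x 600GB SAS in the 2U SFF chassis
sff = raw_tb(24, 0.6)
# 12 x 2TB SATA in the LFF chassis
lff = raw_tb(12, 2.0)

print(f"SFF: {sff:.1f} TB raw")   # SFF: 14.4 TB raw
print(f"LFF: {lff:.1f} TB raw")   # LFF: 24.0 TB raw
```

Remember this is raw space; usable capacity depends on the RAID level you lay on top of it.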
- Speed. OK, it’s not a $500k SAN, but it is still pretty quick for SMB applications. At one point during my tests I was installing 3 Windows Server 2008 R2 servers and 2 Windows 7 VMs, and then configuring one as a domain controller and one as an Exchange 2010 server. All of this took place within about 90-120 minutes… not too bad, I don’t think. It was sustaining around 100MB/s, and I would say that is as random as you can get. (Note: my drive config was a 6-drive RAID 5.)
- VAAI support coming soon. Although it doesn’t have it right now, it will have VAAI support shortly, which will allow much more efficient data transfer from VMware to the SAN. This is the most exciting news in the P2000 roadmap, in my opinion.
I was looking up pricing on Fibre Channel HBAs the other day and I was pretty disappointed at the prices on them. For an SMB they are way too expensive. Come on, guys, $1200 for a single-port 8Gb HBA?? This is going to limit the number of customers who choose FC over iSCSI, especially now that 10Gb iSCSI is available for the MSA. If a customer does want FC storage, I foresee some starting with 4Gb HBAs and not jumping all in at the high price of 8Gb.
Some comments from Nick at SMS:
For a small SMB it is still quite an investment… For a 50-user network it is a must-have. I have never configured any HP storage, but I have configured other vendors’. It took me no time whatsoever to figure it (the P2000) out. Really great interface.
It does one thing and does it well: provide storage. No add-ons to bring it to its knees or cause other compatibility issues.
In closing, I would like to thank the guys at HP for giving me a chance to put the P2000 G3 through its paces. I don’t know of too many manufacturers who would send something worth this much to someone to play with, so hopefully I draw enough attention with these articles to sell a few units for them.
Update: I just received hardware for a customer upgrade: P2000 SAS SAN and a bunch of drives!
Justin,
We are planning to buy a P2000.
We have 2 HP DL380 G7 (VMware) servers with 8 VMs (Oracle, Exchange, …) and about 80 users.
I’m not sure whether I need to connect other servers to the P2000.
What do you think of 10Gb iSCSI vs. SAS and FC interfaces (performance/price)?
thanks.
All three are great options. Most people I had worked with weren’t 10GbE-ready yet, so that is really the only reason we stayed away from those.
As for FC or SAS… I would just ask yourself if you see your company growing. If so, I would probably go with FC, because you can buy switches that could be reused if another SAN is added later, or if you have an FC tape drive, etc.
SAS is nice for the budget-friendly… HBAs are only a couple hundred bucks, whereas FC 8Gb dual-port HBAs are probably still closer to $1000-1500.
Thank you for your fast reply.
I’ve heard that FC is more CPU-friendly than 10Gb iSCSI. Is that true?
We are setting up a multi-site information system with 8 HP DL380 G8 servers and 2 HP P2000 G3 storage devices using 2 HP 8/24 fibre switches. The sites are 6 miles apart. All connections are 8Gb Fibre Channel. Each SAN has 3 virtual drives (one each RAID 5, RAID 6, and RAID 10) and 4 logical volumes. Hosts are assigned to each volume as either read/write or no access, so that each server can only see what I want it to see. This used to be called Selective Presentation but is nowhere near as work-intensive to administer.
Each site has 3 of the servers running ESXi 5.1 and a total of six virtual guests. I built 1 Windows 2008 R2 server and cloned it 5 times for the servers at one site. Then, from home (DSL), I VPNed into my work desktop and in about 45 minutes cloned all 6 of those virtual machines across the 6-mile fibre to the other P2000 G3 and the three VM hosts there.
These P2000 G3s are a huge step up from our older MSA2000 G1s, and while the G3s are our Ravens, our MSA1000s are not only not in the same ball-park, they are playing hacky-sack.
Do I love the P2000G3, you betcha.
(For direct servers-to-P2000G3 connections, I suspect iSCSI would be fine.)
Justin, these units are still working great for us, and we bought them after your recommendation – thanks!
In my latest installation we had to expand the storage of a P2000 to add 24TB of usable storage. We bought 2 additional LFF enclosures and filled them with 12 x 2TB HP SAS 7200rpm drives each.
Assuming 80 IOPS per 7200rpm drive, and RAID 10, do you think the two new enclosures (24 drives) can support around 70-80 Windows Server VMs with average I/O needs? (i.e., file servers, Active Directory, etc.; no databases or anything too demanding… I’m seeing around 30 IOPS per VM.)
I have 4 DL380 G7 hosts connected to this SAS P2000 G3 (2 connections per host, as in your post). Each host has 192GB of RAM, so a total of 768GB of RAM in the cluster (ESXi 5.1).
Hey George! Thanks for the update.
It’s hard to say exactly, but you basically added about 1,920 IOPS of raw spindle capacity (24 drives x 80), so across 80 VMs that works out to roughly 24 IOPS each before the RAID 10 write penalty — right in the neighborhood of the 30 IOPS per VM you’re seeing, so it will be close.
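For anyone who wants to redo this back-of-envelope sizing, here is a minimal sketch. The 80 IOPS per 7.2k spindle comes from the comment above; the 70% read ratio and the helper name are my own assumptions, so adjust them for your workload:

```python
def raid10_effective_iops(drives: int, iops_per_drive: int,
                          read_fraction: float) -> float:
    """Host-visible IOPS from a RAID 10 set.

    Reads hit one spindle; each host write costs two back-end
    IOPS because it lands on both halves of a mirror pair.
    """
    raw = drives * iops_per_drive
    backend_per_host_io = read_fraction * 1 + (1 - read_fraction) * 2
    return raw / backend_per_host_io

raw = 24 * 80                             # 1920 raw back-end IOPS
eff = raid10_effective_iops(24, 80, 0.7)  # assuming 70% reads
print(raw, round(eff))                    # 1920 1477
print(round(eff / 30))                    # ~49 VMs at 30 IOPS each
```

At a 70/30 read/write mix the write penalty eats a sizable chunk of the raw number, which is why a 24-spindle RAID 10 set lands closer to 50 VMs than 80 at 30 IOPS apiece.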
If you find that things are slow, you should take a look at PernixData’s FVP product. It allows you to use SSDs or PCIe flash as a read/write cache in front of any type of SAN.
I am new to these P2000s and wondering if you can answer the below (this question may not make sense). Say I want to move the disks in this P2000 that are storing data for a blade (host) to a completely new P2000, while moving the same blade (host) to a different setup (from one c7000 to another c7000).
Can this be done in an easy way?
Theoretically, the controller in the new frame should be able to read the metadata from the drives out of the old frame. However, I would contact HP for a procedure on doing it.
My guess is that both frames would need to be shut down, all drives removed from the old frame and placed into the new frame, and then everything booted up.
If you try to pull the drives from a running frame, it will just assume that they are failing and drop them from the RAID set.
If you try to insert them into the new frame while it is running, it won’t be able to read the metadata and find all the drives, and it will just think they are unused disks.
At least that was my experience when I had the demo unit I was playing with.