One of my upcoming projects is to install a new EMC Clariion AX4 iSCSI SAN for a customer who currently has an aging AX4 that only has SATA storage. The new SAN has 4 – 300GB drives for the vault, and 6 – 600GB 15k SAS drives. This isn’t completely apples to apples (my loaner P2000 only has 146GB 10k drives), but it’s as close as I can get with my current budget (which is $0) haha.
Anyhow, the Clariion AX4 is pretty easy to set up; it took about 30 minutes to rack and stack it and its redundant power supply, and only about 5 minutes to cable it… much different than its Celerra sister that I set up a few months ago.
The test environment is my VMware lab, which consists of an ML370 G5 with 22GB of RAM providing the CPU/RAM resources, and we are going to hook both an EMC Clariion AX4 and an HP P2000 SAN to it via iSCSI. The iSCSI will be multipathed from the server to the SAN, with 2 – 1Gb NICs from the server to the switch and 2 – 1Gb ports per SAN controller to the switch. Because we are just using the VMware native multipathing and not EMC PowerPath, we will basically be limited to 2Gb of throughput (theoretical).
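One thing worth noting about native multipathing: to actually spread I/O across both 1Gb paths you generally want the Round Robin path selection policy, since the default policy tends to leave one path idle. A rough sketch of what that looks like from the ESXi shell (this uses the newer 5.x-style esxcli syntax; 4.x used `esxcli nmp device setpolicy` instead, and the device ID below is a made-up placeholder):

```shell
# Placeholder device ID -- substitute the naa. identifier of your iSCSI LUN
DEVICE="naa.XXXXXXXXXXXXXXXX"

# Switch the path selection policy to Round Robin so I/O rotates
# across both 1Gb paths instead of riding a single active path
esxcli storage nmp device set --device "$DEVICE" --psp VMW_PSP_RR

# Confirm the policy took effect and that all expected paths are listed
esxcli storage nmp device list --device "$DEVICE"
esxcli storage core path list --device "$DEVICE"
```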
So first up, let’s see what’s in the AX4:
| Part # | Description | Price |
| --- | --- | --- |
| AX4-5I | 2U dual-SP DPE iSCSI front end w/ 1U SPS | 7185 |
| V-AX4530015K | Qty 4 – 300GB 15K 3Gb SAS disk drive | 2670 |
| AX-SS15-600 | Qty 6 – 600GB 15K 3Gb SAS disk drive | 7620 |
| AX4-5SPS | Second SPS (optional) | 690 |
| AX4-5CTO | Factory config services AX4-5 DPE/DAE | 25 |
For those of us who don’t speak distribution… we have:
- 2U shelf with dual iSCSI controllers and a 1U standby power supply
- 4 – 300GB 15k RPM SAS drives for the “Vault”
- 6 – 600GB 15k RPM SAS drives
- An additional standby power supply module (this means one for each controller)
- Factory configuration (i.e., they put the drives in the shelf and load the OS)
Setup is pretty much identical to the P2000: assign some management IP addresses, then some iSCSI host addresses, then configure a virtual disk and a logical volume, and present it to the initiators. Clearly a nutshell how-to, but this is a performance post, not a setup post.
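On the ESXi side, attaching either array looks about the same once the LUN is presented. A minimal sketch, again assuming 5.x-style esxcli syntax; the adapter name `vmhba33` and the target IPs are placeholders for whatever your software iSCSI adapter and SP port addresses actually are:

```shell
# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# Point dynamic (SendTargets) discovery at one iSCSI port on each
# storage processor, so paths to both SPs are discovered
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.11:3260

# Rescan so the newly presented LUN shows up as a device
esxcli storage core adapter rescan --adapter=vmhba33
```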
Compared to the P2000, the only two differences between the two for our testing purposes are that the EMC has 15k drives instead of 10k, and they are 600GB instead of 146GB. Other than that I have tried to keep this as close as possible.
I have based this testing on the IOmeter configuration files from VMKTree. Here are the results for the EMC:
And just for fun I took a look at what the Fibre Channel interface could provide, even though I only have a 4Gb HBA:
To make things really easy to see, I used my advanced Excel skills and made some bar graphs.
To perform these tests I created a Windows 2008 R2 virtual machine on my ML370 G5 server and installed IOmeter on it. I tested the first SAN, then Storage vMotioned the virtual machine to the other SAN, let it settle for a little while, and ran the same test again on that SAN.
My conclusion is that it would be pretty hard to decide which way to go. I think that if the HP P2000 G3 had 600GB 15k RPM drives it would be faster in every category tested, but as configured it is still right on par with the EMC, except when using Fibre Channel, in which case the P2000 wins (no surprise, though, given how much faster FC is). If I were evaluating both of these SANs to meet a basic iSCSI storage need for my business, I think I would buy strictly based on price and things like warranty coverage and whatever other stuff I could squeeze out of my sales guy.
Two other things that I would consider are: 1.) how many ESXi hosts do I have (if fewer than 4, why not go with the P2000 G3 SAS and save some $$$), and 2.) what kind of deals will EMC offer on the VNXe series that just came out? My only reservation on the VNXe is that the P2000 and the Clariion are simple iSCSI block storage… that’s all they do… and they both do it well… so do I really want to add in NAS features?
Just my 2 cents.