RAID10: Do you really need it?

This past week I installed an EMC VNXe 3100. The install went well, and the new Unisphere interface made it a snap to get the VMware ESXi hosts added. One of the requirements was to have some RAID10 space and make the rest RAID5, and because of the way the VNXe creates storage pools and the number of disks I had to work with, I created a 6-drive RAID10 pool (the minimum number of drives for RAID10) and a 15-drive RAID5 pool.

Then I got to thinking… would 6 drives in RAID10 really outperform 15 drives in RAID5?

Let me back up for a minute and explain why the RAID10 requirement is there. Before the migration to virtualization, the customer had a SQL cluster in place that used an older SAN for shared storage. The goal with the new equipment was to create a high-IOps LUN that we could migrate that SQL data to while still maintaining good performance… which is why RAID10 was thought to be the best option.

So now that the hardware is installed and we can test some real-world numbers, I thought we might as well see if giving up all that extra drive space for RAID10 was really going to get us more IOps.

First, here is the RAID5 performance. This is on the VNXe 3100 with a 15-drive RAID5 storage pool divided up into several VMware datastores. The IOMeter config that was used was downloaded from the VMKTree.org site and is the same one that I’ve used for the other SAN testing that I’ve done.

Note: All drives are 300GB 15K RPM 3.5″ Drives.

And here is the RAID10 performance with 6 drives.

Conclusion

So after seeing these results, I wonder if RAID10 storage is really needed. Obviously, if we had the same spindle count for RAID5 as for RAID10, RAID10 would have been faster. But for an SMB situation like this, I think we would be better off removing the RAID10 pool and adding 5 more drives to the RAID5 pool. Realistically, we would be able to achieve higher performance with a 20-drive RAID5 than with a 6-drive RAID10, plus we would have more space available for future needs.
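To put some rough numbers behind that, here is a minimal back-of-envelope sketch using the textbook RAID write penalties (4 for RAID5, 2 for RAID10) and an assumed ~175 IOps per 15K RPM spindle. Treat it as an estimate rather than a benchmark; cache, stripe size, and I/O size will all move the real numbers.

```python
# Back-of-envelope effective IOPS estimate. Assumptions (not measured data):
# ~175 IOPS per 15K RPM spindle, textbook write penalties of 4 (RAID5) and
# 2 (RAID10), and a 70/30 read/write mix.

PER_DRIVE_IOPS = 175  # rough figure for a 15K RPM SAS drive


def effective_iops(drives: int, write_penalty: int, read_pct: float) -> float:
    """Host-visible IOPS for a given spindle count, write penalty and read mix."""
    raw = drives * PER_DRIVE_IOPS          # total back-end IOPS available
    write_pct = 1.0 - read_pct
    # Each host write turns into `write_penalty` back-end I/Os.
    return raw / (read_pct + write_pct * write_penalty)


for label, drives, penalty in [("6-drive RAID10", 6, 2),
                               ("15-drive RAID5", 15, 4),
                               ("20-drive RAID5", 20, 4)]:
    print(f"{label}: ~{effective_iops(drives, penalty, read_pct=0.70):.0f} IOPS")
```

On those assumptions, the 15-drive RAID5 pool already edges out the 6-drive RAID10 at a 70/30 read/write mix, and a 20-drive RAID5 pulls further ahead, which lines up with the test results above. The heavier the write mix, the more of that extra-spindle advantage gets eaten by the parity penalty.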

There are situations I can think of where RAID10 would still be required even if RAID5 performance is just as good, and that is when a software vendor requires it… some won’t even support their application unless it’s on RAID10. So the bottom line is: check your software manuals and make sure RAID10 isn’t mandatory before switching everything to RAID5.

12 Responses to "RAID10: Do you really need it?"

  1. If you’re only interested in throughput, this makes sense. However, when dealing with 20 drives and only having protection against a single drive failure, you will want to consider having a hot spare… or two!

  2. Well, two things I see with your test. It is set up as a primarily read-oriented test, moving from random to sequential.
    R1/0 really shows its benefits for sequential writes (host to disks) and a reduction in overhead. In some ways your test shows this, as the processor utilization is lower on certain workloads.

    R5 is the price/performance leader for random reads, but a heavy write workload on the R5 disks will cause overhead on the SPs for the parity “penalty”.

    I’d love to see the same disk quantities (5+1 R5 and 3+3 R10) in a test of 80/20 R/W and 20/80 R/W. That may better show where the R5 vs R10 choice should be made, such as Exchange or SQL transaction logging LUNs (R10) versus VDI boot volumes (R5).
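For reference, here is the same back-of-envelope write-penalty arithmetic applied to that equal-spindle comparison (5+1 RAID5 vs 3+3 RAID10) at the two suggested mixes, still assuming ~175 IOps per 15K spindle, so an estimate rather than a measurement:

```python
# Same write-penalty sketch as above, for equal spindle counts (6 drives each)
# at the 80/20 and 20/80 read/write mixes suggested in this comment.
# Assumes ~175 IOPS per 15K spindle; not measured data.
PER_DRIVE_IOPS = 175


def effective_iops(drives: int, write_penalty: int, read_pct: float) -> float:
    raw = drives * PER_DRIVE_IOPS
    return raw / (read_pct + (1.0 - read_pct) * write_penalty)


for read_pct in (0.80, 0.20):
    r5 = effective_iops(6, 4, read_pct)   # 5+1 RAID5
    r10 = effective_iops(6, 2, read_pct)  # 3+3 RAID10
    print(f"{read_pct:.0%} read: RAID5 ~{r5:.0f} IOPS, RAID10 ~{r10:.0f} IOPS")
```

With the spindle count held equal, RAID10 comes out ahead in both cases on this simple model, and the gap widens as the write fraction grows, which is exactly the transaction-log scenario the comment describes.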

  3. Justin,

    Are you running NFS or iSCSI on this box? The Max Throughput numbers look kind of low for both R5 and R1. Here is what I am getting with iSCSI with 10 300GB drives in two RAID 5 (4+1) groups.

    Test name                  Latency   Avg IOPS   Avg MBps   CPU load
    Max Throughput-100%Read    18.04     3348       104        13%
    RealLife-60%Rand-65%Read   31.82     1681       13         0%
    Max Throughput-50%Read     18.78     3254       101        12%
    Random-8k-70%Read          30.30     1730       13         0%

    I would think that 5 more drives would result in better numbers.

  4. Nice article.
    I had the same problem with a software vendor whose product is based on MSSQL: still thinking in a physical way, they asked us for a separate RAID10 partition for the database and another for the logs.
    I had a hard time explaining how things are different in VMware, and when I asked them for required IOPS rather than a RAID architecture, they had no benchmarks for their own software…

    I think people need to switch from RAID models to IOPS (read and write separated, as stated in other comments): if, as a sysadmin, I am able to guarantee the IOPS you requested, then you, as a software developer, do not need to know what kind of RAID I am going to run your application on.

    Automatic tiering will definitely be the solution for this, but it is a feature that is not so common in SAN storage (I’ve seen it in Compellent and it rocks, but it’s also sooo expensive).

  5. We’ve had to make exactly the same decision on our VNXe (single unit, no additional DAEs) about whether to go for two 4+1 RAID5 pools or change one of those to a RAID10 to ensure performance for our SQL server (the R10 pool will also do Exchange 2010 and a few other VMs with the RAID5 doing file sharing, print server etc)

    Just trying to decide if the better performance is worth the trade-off of buying an extra disk and losing 600GB over the RAID5…

  6. Went the VNXe route, eh? I remember us discussing the P2000 as the competitor… both good boxes, just a few more restrictions on the VNXe.

  7. Yup, in the end we went with it through the VAR we chose to work with. Also, HP didn’t help themselves with the price they wanted to charge for additional disk shelves, which made it more pricey than the VNXe.

    Setting up networking on it with VMWare has been “fun” though, thanks EMC :-/

    Now just have to decide which way is best on the disk pools… RAID10 with faster performance but losing 600GB or 2 x RAID5 with 4.1TB usable but the additional write penalty…
