Answering Josh’s EMC VNXe Questions

Josh left a great comment for me on the new VNXe Host and LUN setup post, and I felt the questions, and their answers, were important enough for a post of their own. Here they are:

[stextbox id="black" caption="Josh's Comment"]

Awesome post, but we need more details!

-When/why would someone choose boot from SAN versus either an SD card or mirrored raid ssd?

-Can you compare/contrast the storage capabilities of direct attached fiber channel versus 1Gb Ethernet, 10Gb Ethernet, etc

-I really like this configuration because I think it captures a lot of the small business use cases. Most of the time one host could do the job, but we choose two for fault tolerance. By using direct attached storage (in this case 2 hosts) you don’t have to rely on networking, you don’t have to rely on a FC switch.

-Can you talk more about the new VNXe – can it move data around in the storage pool? Can you have a mix of fast drives and capacity drives and have it shuffle data around?

[/stextbox]

So here are my answers:

When/why would someone choose boot from SAN versus either an SD card or mirrored RAID SSD?

Booting from SAN solves a few problems in my opinion.

  1. It makes things cheaper. On the project I’m working on right now I was able to save about $2k by not purchasing local drives for the ESX hosts. It doesn’t seem like much, but when the SAN and three new hosts cost the customer under $40k… $2k is a decent amount.
  2. It’s more reliable, IMO. Don’t get me wrong, I have used USB / SD cards many times, and some of them from my earliest projects are still going. But if I can put a 2GB boot LUN on a SAN… and the SAN is under warranty… there is very little that can cause that host not to boot. If a drive goes bad, just swap it… no host downtime or reload.

Can you compare/contrast the storage capabilities of direct-attached Fiber Channel versus 1Gb Ethernet, 10Gb Ethernet, etc.?

Sure can. Fiber Channel is STUPID FAST. Sure, 10Gb Ethernet is fast too, but then I would have to configure 10Gb switches, or at least a few /30 subnets so that each of the SAN ports would know which host it’s talking to. With direct-attach Fiber Channel (or FC-AL, in official terms) I just plug in cables… THAT’S LITERALLY IT.

It can also be argued that 8Gbps Fiber Channel is just as fast as 10Gbps iSCSI or FCoE. Plus, the VNXe1600 now supports 16Gbps Fiber Channel… It’s a no-brainer for smaller shops to direct-connect Fiber Channel.
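For contrast, here’s a rough sketch of the /30 bookkeeping you’d be signing up for with direct-attach 10Gb iSCSI instead. The address range and the SAN-port / vmkernel labels below are made up purely for illustration; the point is how much per-link planning FC-AL lets you skip:

```python
# Rough sketch: carving /30 point-to-point subnets for direct-attach iSCSI.
# The 192.168.50.0/24 range and the port/host labels are hypothetical --
# this just illustrates the bookkeeping that direct-attach FC avoids.
import ipaddress

links = [
    ("SPA-eth10", "esx01-vmk1"),
    ("SPA-eth11", "esx02-vmk1"),
    ("SPB-eth10", "esx01-vmk2"),
    ("SPB-eth11", "esx02-vmk2"),
]

subnets = ipaddress.ip_network("192.168.50.0/24").subnets(new_prefix=30)

for (san_port, host_vmk), net in zip(links, subnets):
    san_ip, host_ip = list(net.hosts())  # a /30 has exactly two usable IPs
    print(f"{net}: {san_port} -> {san_ip}, {host_vmk} -> {host_ip}")
```

That’s four tiny subnets, static IPs on both ends, and MTU settings to keep straight, versus zero IP configuration for direct-attach FC.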

I really like this configuration because I think it captures a lot of the small business use cases. Most of the time one host could do the job, but we choose two for fault tolerance. By using direct attached storage (in this case 2 hosts) you don’t have to rely on networking, you don’t have to rely on a FC switch.

BINGO! Eliminate two iSCSI switches from an SMB BOM and you just saved $5k… and took two items off warranty and out of the equation for troubleshooting. I’ve been doing this with the HP MSA2000/P2000 as well as the VNXe series for years; it works great and is super reliable. Plus, if a customer ever did need to scale, you could just add switches later. If you go with the VNXe3200, it has 4 FC ports per controller, which is more than the number of hosts VMware Essentials Plus supports (three)… So I always figured that if a customer can afford Enterprise-class VMware licensing, they can afford two Fiber Channel switches.
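To put some numbers behind that, here’s a quick sketch of the port math for a direct-attach FC design on the VNXe3200. The host names are hypothetical, and it assumes each ESXi host gets one path to each storage processor for redundancy:

```python
# Quick port-math sketch for direct-attach FC on a VNXe3200.
# Assumptions: 2 storage processors (SPA/SPB) with 4 FC ports each, and each
# ESXi host connects one HBA port to each SP. Host names are made up.
SP_PORTS = {"SPA": 4, "SPB": 4}
hosts = ["esx01", "esx02", "esx03"]        # Essentials Plus tops out at 3 hosts

paths_needed = len(hosts) * len(SP_PORTS)  # one path per SP per host
ports_available = sum(SP_PORTS.values())

print(f"FC ports on the array : {ports_available}")
print(f"Direct-attach paths   : {paths_needed}")
print(f"Spare ports remaining : {ports_available - paths_needed}")
```

A three-host Essentials Plus cluster fits with two ports to spare, which is exactly why the no-switch design holds up until the host count (and the licensing tier) grows.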

Can you talk more about the new VNXe – can it move data around in the storage pool? Can you have a mix of fast drives and capacity drives and have it shuffle data around?

The VNXe3200 has almost all of the capabilities of its big brother, the VNX series. It can do FAST VP as well as FAST Cache, and drive types as well as RAID types can be mixed in pools. It looks like the VNXe1600 only has FAST Cache support… no FAST VP. But you could still create two pools and manually sort the data. Honestly, though, if you just maxed out the FAST Cache and then filled it with high-capacity 10k SAS drives, the whole system is still going to be cheap enough that you can skip NL-SAS drives entirely.

Sorry for not going into more detail on the last question, but you would be better off checking the datasheets for those details, as I’m just starting to get my hands on the 1600 now.
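That said, here’s a back-of-the-napkin sketch of the "two pools, sort the data yourself" approach on a VNXe1600. The drive counts, sizes, and RAID layouts are illustrative assumptions only, not sizing guidance; check the datasheets and best-practice guides before designing anything real:

```python
# Back-of-the-napkin sketch of manually splitting a VNXe1600 into two pools
# (no FAST VP to auto-tier, so you place workloads yourself).
# Drive counts, sizes, and RAID layouts below are assumptions for illustration.

def usable_tb(drives, size_tb, data_per_group, group_size):
    """Very rough usable capacity: raw size minus parity; ignores hot spares
    and pool overhead."""
    groups = drives // group_size
    return groups * data_per_group * size_tb

# Hypothetical "performance" pool: 10k SAS in RAID 5 (4+1)
perf = usable_tb(drives=10, size_tb=1.2, data_per_group=4, group_size=5)

# Hypothetical "capacity" pool: NL-SAS in RAID 6 (6+2)
cap = usable_tb(drives=8, size_tb=4.0, data_per_group=6, group_size=8)

print(f"Performance pool (10k SAS, RAID 5 4+1): ~{perf:.1f} TB usable")
print(f"Capacity pool   (NL-SAS,  RAID 6 6+2): ~{cap:.1f} TB usable")
```

The IOPS-hungry VMs go on the SAS pool (fronted by FAST Cache), and file shares or backups go on the capacity pool; the 1600 just won’t shuffle blocks between the two for you the way FAST VP does on the 3200.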

 

As always, let me know if you have any more questions.



One Response to "Answering Josh’s EMC VNXe Questions"

  1. Another great post — this blog is invaluable!

    Just wanted to contribute to the overall design conversation with experience that I’ve been running into.

    If your SMB design includes Cisco UCS Mini with the new VNXe line, then you have an incredible little datacenter indeed. I’ve had tremendous success direct-attaching a VNXe3200 (or 1600) to the UCS Mini chassis using 4 paths of 10Gb iSCSI (inexpensive active-optical or active-twinax cables), leaving 4 paths (not to mention the remaining QSFP interface in the 6324 FICs) to uplink to switches (which Cisco supports with 1Gb GLC-T transceivers).

    Direct-attached, highly available, high-speed block storage, with no investment in 10Gb switching (which, as indicated, can be a game changer in an SMB project). 4Gb of total Ethernet uplink (2 port-channels per fabric) has proven more than sufficient for the typical 2-3 blade vSphere Essentials deployment. You do have to consider iSCSI subnets and (if you’re booting from SAN) vNIC “overlays” here, but on the other hand you don’t have to worry about zoning.
