Direct Attach Fibre Channel with the EMC VNXe3200

The demo box that I have from EMC does not have the Fibre Channel mezzanine card in it, but last week I did get a chance to configure a VNXe3200 with direct-attached Fibre Channel hosts for the first time (customer install). I must say that the process was stupid simple.

Unfortunately I was not smart enough to grab screenshots during the install, but I will try to explain it as best I can without them.

Overall the installation couldn’t have been easier: I plugged each VMware host into each controller, powered up the SAN, initialized it, and provisioned my storage pools just like normal. Then I powered on the VMware hosts and made sure they could see the VNXe’s “0-byte” LUN (the placeholder LUN the array presents before any storage is assigned). Once I saw that, I knew I was in pretty good shape. I double-checked the Initiators tab in Unisphere, and sure enough it showed each of the WWNs from the Fibre Channel cards in the hosts.
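If you want to sanity-check the same thing from the VMware side, a minimal pyVmomi sketch like the one below (the vCenter address and credentials here are hypothetical) prints the FC initiator WWPNs each ESXi host presents; they should line up with what shows on the Initiators tab in Unisphere:

```python
# Minimal pyVmomi sketch (hypothetical vCenter address and credentials):
# print the FC initiator WWPNs each ESXi host presents to the array.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()        # lab-only: skip cert checks
si = SmartConnect(host="vcenter.lab.local",   # hypothetical address
                  user="administrator@vsphere.local", pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for hba in host.config.storageDevice.hostBusAdapter:
            if isinstance(hba, vim.host.FibreChannelHba):
                # WWPNs are stored as 64-bit ints; format as colon-separated hex
                wwn = "{:016x}".format(hba.portWorldWideName)
                print(host.name, hba.device,
                      ":".join(wwn[i:i + 2] for i in range(0, 16, 2)))
    view.Destroy()
finally:
    Disconnect(si)
```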

After I created some VMware datastores in Unisphere, it let me grant access to each of the VMware hosts, the same as it would have if they had been iSCSI-attached.
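The rescan after granting access can be scripted the same way. A short sketch, reusing a `host` object from the loop above (these are standard vSphere API calls via pyVmomi):

```python
# Sketch: after granting the host access to a new LUN in Unisphere,
# rescan so ESXi picks up the device and any VMFS volume on it.
# 'host' is a vim.HostSystem object, e.g. from the loop above.
ss = host.configManager.storageSystem
ss.RescanAllHba()    # discover newly presented FC LUNs
ss.RescanVmfs()      # discover VMFS volumes on those LUNs
for ds in host.datastore:
    print(ds.summary.name, ds.summary.capacity // 2**30, "GiB")
```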

Overall, the whole installation took about 4 hours from the time I started unboxing the SAN until I was migrating data from the existing SAS-attached Dell MD3000 array to the new Fibre Channel-attached VNXe3200. As for performance, we were limited by the Dell MD3000, but we were still seeing as much as 200-300 MB/s.

It was definitely a great experience installing this config, and I look forward to doing it a bunch more in the future!

10 Responses to "Direct Attach Fibre Channel with the EMC VNXe3200"

  1. Did you test vMotion between the two hosts? I have a customer who is about to do the same thing, and they were worried about vMotion not working properly.

  2. vMotion is an Ethernet-based function that transfers the CPU/memory workload from one host to another. As long as vMotion is set up properly, the storage infrastructure type on the back end is of no concern… the only requirement for vSphere 5.5 and older is a shared datastore. 6.0 also requires a shared datastore, unless you plan to do a Storage vMotion at the same time. Storage vMotion works just fine on direct-attached FC storage as well.

    Does that help?
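    For what it's worth, here's a minimal pyVmomi sketch of that point (the VM and host names are hypothetical, and `si` is a vCenter connection as in the earlier snippets): the migrate call never references the storage transport, so FC vs. iSCSI behind the shared datastore doesn't enter into it.

    ```python
    # Hypothetical sketch: a vMotion via pyVmomi is just MigrateVM_Task()
    # against the destination host -- nothing in the call references the
    # storage transport behind the shared datastore.
    from pyVmomi import vim

    content = si.RetrieveContent()   # 'si' from the earlier connection sketch
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine, vim.HostSystem], True)
    objs = {o.name: o for o in view.view}
    view.Destroy()

    vm = objs["test-vm"]              # hypothetical VM name
    dest = objs["esx02.lab.local"]    # hypothetical destination host
    task = vm.MigrateVM_Task(
        host=dest, priority=vim.VirtualMachine.MovePriority.defaultPriority)
    ```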

  3. Justin,

    Do you recommend that I set up VNXe CIFS or iSCSI? My original thought was to use iSCSI, and it’s been working for me. However, I keep hearing lots of positive things about SMB 3.0. I believe that my VNXe3200 supports it.

    Does SMB 3.0 work in your favor if the file-serving server is virtualized? Does it work the same if I’m using a physical server with MS 2008? I plan to migrate some servers to 2012 but haven’t yet.

    I like all the redundancy that is built into the VNXe3200. This will enable me to update/reboot my file servers without affecting the users.

    Thanks,

    Raul Trujillo

  4. I typically like to keep all my data inside of a VM… it’s a nice little container, which makes it very easy to move around.

    So basically I have unlimited backup and replication options. I can switch from hardware to hardware as I want… no lock-in, etc.

    So with that said, I can honestly say that I don’t recommend using the file capabilities on the hardware at all… not that they aren’t good… but I prefer to keep it all in a VM.

  5. Hiya,

    We have a new VNXe1600 SAN, and we originally provisioned our hosts with 1 Gbps iSCSI. The 1600 comes with FC. If we purchase two FC HBAs, can we start using FC and connect them to the back of the SAN? Can the SPs have both iSCSI and FC connected at the same time? I’m thinking of setting up one host first and connecting it directly to the SAN, then the other.

    Shaun
