Stats Logging on an EMC VNX SAN

From time to time I get asked how to turn on statistics logging on a VNX array so that a customer can see what is going on inside the box. To go along with that request, I have also included instructions on how to pull those stats off the array so you can send them to your EMC team or partner.

Turning stats logging on

  1. After logging into the array, click on the system's serial number in the upper left-hand area of Unisphere.
  2. Next, click the “System” button in the upper ribbon.
  3. On the System menu, select “Monitoring and Alerts”.
  4. On the Monitoring and Alerts menu, select “Statistics”.
  5. Finally, in the Settings area on the Statistics page, select “Performance Data Logging”. It is the top option on the left side.
  6. A box will pop up with several options. Make sure that “Periodic Archiving” is checked, and check the box next to “Stop Automatically After” (I normally leave this at 7 days, but adjust as needed). Finally, make sure to click “Start” before closing this box.
  7. After starting stats, just sit back and wait for the time period you selected to pass. Once it has, you can return to the SAN, retrieve the stats archives, and send them to your EMC partner or EMC SE as needed. (If you would rather use the command line, see the sketch below.)
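
For the CLI-inclined, the same thing can be kicked off with Navisphere CLI. This is a minimal sketch, assuming naviseccli is installed on your workstation; the SP address and credentials are placeholders, so verify the analyzer flags against your CLI release:

```
SPA=192.168.1.10   # placeholder SP A address -- substitute your own

# Start performance data logging (same effect as clicking "Start" in Unisphere)
naviseccli -h $SPA -user sysadmin -password sysadmin -scope 0 analyzer -start

# Confirm that logging is actually running
naviseccli -h $SPA -user sysadmin -password sysadmin -scope 0 analyzer -status
```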

Retrieving stats archives

To collect the stats from the array follow steps 1 – 4 from the previous section, then proceed with the following steps:

  1. Once you are on the Statistics page, click “Retrieve Archive” in the Archive Management panel (4th option down on the right side).
  2. A box will pop up that looks much like a basic file manager. In the “Archives on SP” box you will find the stats archives. Select all of the relevant archive files (they end in .NAR or .NAZ), choose a Save As location, and click Retrieve. Make sure to also switch over to SP B and collect the NAR/NAZ files from that storage processor too!
  3. After clicking Retrieve, the stats archives will download to your local PC. From here you can click “Done” in the Retrieve Archives box and create a ZIP file from all of the NAR/NAZ files. They are now ready to go to your EMC partner or SE team. (A scripted version of this retrieval is sketched below.)
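
The retrieval side can be scripted too. Another hedged sketch with placeholder addresses and credentials; I believe the -archive options behave as commented, but confirm them on your naviseccli release:

```
# Pull the archives from BOTH storage processors (placeholder addresses)
for SP in 192.168.1.10 192.168.1.11; do
    # List the NAR/NAZ archive files sitting on this SP
    naviseccli -h $SP -user sysadmin -password sysadmin -scope 0 analyzer -archive -list

    # Retrieve all of them into the current directory (-o overwrites existing local copies)
    naviseccli -h $SP -user sysadmin -password sysadmin -scope 0 analyzer -archive -all -o
done

# Bundle everything up for your EMC partner or SE
zip vnx-stats.zip *.nar *.naz
```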

The only other thing required to generate the pretty graphs and heat maps is an SP Collect from one of the SPs in the system. Typically EMC will have you upload all of this data to their FTP server (they will provide an account). Or, if you are working with me, I will either WebEx in and upload the files directly from your machine, or have you send them to my FTP server.
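
The SP Collect can also be triggered and fetched without the GUI. Again a sketch from memory with a placeholder address; spcollect and managefiles are standard naviseccli commands, but double-check the options on your release:

```
SPA=192.168.1.10   # placeholder SP A address

# Kick off an SP Collect on SP A
naviseccli -h $SPA -user sysadmin -password sysadmin -scope 0 spcollect

# Wait several minutes for the *_data.zip to be generated, then list and retrieve it
naviseccli -h $SPA -user sysadmin -password sysadmin -scope 0 managefiles -list
naviseccli -h $SPA -user sysadmin -password sysadmin -scope 0 managefiles -retrieve
```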


Upgrading a unified VNX freezes and leaves Unisphere broken

Last week I was upgrading a unified VNX5300 to FLARE 32, since FLARE 31 will be end of support soon, and ran into some issues. This was the first time I had ever had this happen to me, but apparently it's a known issue.

When running the upgrade from Unisphere Service Manager, all of the prep work goes fine and you can start the upgrade. However, about 10-15 minutes into the upgrade you will notice that the progress bar stops progressing. Another sign that you have run into this known bug is to SSH into the primary control station and run /nas/sbin/getreason: you will see that the secondary control station is powered off. You will also have noticed a “This system is being upgraded… get out” type warning when you logged in.
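
For reference, on a healthy unified system getreason shows both control stations and the data movers with normal reason codes. The output below is a rough sketch from memory rather than verbatim (10 and 11 are the primary and secondary control stations, 5 is a contacted data mover); when you hit this bug, the slot_1 line reports the secondary control station as powered off instead:

```
$ /nas/sbin/getreason
10 - slot_0 primary control station
11 - slot_1 secondary control station
 5 - slot_2 contacted
 5 - slot_3 contacted
```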

I gave the system about 3 hours (I was off doing other things) before I called support. The fix isn't pretty, but it does work.

What you will need to do is get the upgrade DVD from support.emc.com and put it into the primary control station.

Next, log in to CS0 as root.

Then run the following:

```
# Mount the upgrade DVD read-only (the optical drive shows up as /dev/hdc on the control station)
mount -r /dev/hdc /celerra/upgrade

# Run the NAS installer from the DVD to restart the stalled upgrade
cd /celerra/upgrade/EMC/nas
./install_mgr -m upgrade
```

This will kick off the NAS portion of the upgrade using the DVD as the source for the update. After it has completed, Unisphere will be back online and you can proceed to upgrade the block side of the array with USM again, this time without issue.
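
Before jumping back into USM, it is worth a quick sanity check from CS0. Both of these commands live on the control station (paths from memory; output format varies by release):

```
# Confirm the NAS code is now at the target release
/nas/bin/nas_version

# Confirm the control stations and data movers all report normal reason codes
/nas/sbin/getreason
```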

Hands on with the new EMC VNX5400

This week I was onsite with a customer setting up their new virtualization environment and got to play with a couple of the new EMC VNX5400s. I won't go through all the marketing stuff, as there are plenty of places you can look at that, but let's take a look at what I unboxed.

Looking at the back of the DPE, it's pretty easy to see some major differences if you are used to the VNX5300. First, instead of onboard ports they now use SLIC modules, just like the data movers in a unified system do. Also, the huge orange knob is new. After some tinkering I found that it is used to remove the top module, which contains the battery backup unit on the left (it's the small square box on both SPs) as well as the power supply and the back-end bus connectors.


Next up is a view of the front. It looks much like a VNX5300/5100 with the 2.5″ drives, except instead of a plain metal spacer under the drives we now have fans which sit in front of the CPU modules (i.e. the brains of the SAN). This means the SPs are now removed through the front of the unit, not the rear.


Ok, now for a closer look at the rear of the “A” side. If you look at the top middle of the photo you will see two small square connectors; these are the new back-end bus ports. Before, the cables that connected the shelves together were the same size on both ends, but now the cable that goes from the DPE to the first shelf on both bus 0 and bus 1 requires a special cable with this smaller end (see the next two pictures for a close-up of this cable). In the picture directly below you can also see this system is configured with only a single 4-port 8Gb Fiber Channel module.


One view of the new bus cable.


And another.


Ok, so next up is the power supply and cable for the front face plate light… if you don't install this the system will fail to be awesome (I'm kidding, of course). But yeah, this little guy gets plugged into the rail (yes, the rail holding the DPE into the rack) and feeds power to the front face plate so you get the badass blue light seen in the next picture.


Front face plates after receiving power from the above power supply and cable.


Lastly, I wanted to leave you with some better pictures and stuff, but I could not find any Visio stencils; all I could find were some pictures from the Chinese EMC forums, which I have included in the gallery below. Stay tuned: I will post performance numbers for this system as well, but in another post.


Anatomy of an EMC VNX Array

UPDATE: I am starting to update this page with VNX2 (code name Rockies) information. I have only uploaded the components sheet so far, but when I get time to do the others I will update them as well.

I find that many times when I’m talking to customers they are not sure what all makes up a VNX array, and this can complicate troubleshooting over the phone as well as cause confusion on both sides. To help clarify things I have created a series of quick reference sheets showing the components and what they are called. I have also created a couple of sheets that show what cables are involved in the VNX system and where they go. Stay tuned for VNXe sheets.

For the official VNX Spec sheet head over to EMC’s website.

Quick Reference Sheets

| Title | PDF Link | PNG Link |
| --- | --- | --- |
| VNX 5100/5300/5500 Components | PDF | PNG |
| VNX 5200/5400/5600/5800/7600 Components | PDF | PNG |
| VNX 5100/5300/5500 SAS and Power Cables (I also did a quick how-to video for these cables: https://youtu.be/Rydw-kQvMDs) | PDF | PNG |
| VNX 5300/5500 Unified/File Cables | PDF | PNG |
| VNX 5700/7500 Components (the VNX 8000 also uses the same general layout) | PDF | PNG |
| VNX 5700/7500 SAS and Power Cables | PDF | PNG |
| VNX 5700/7500 Unified/File Cables | PDF | PNG |

Components and their purpose

Standby Power Supply – SPS – This is a 1U uninterruptible power supply designed to keep the storage processors powered during a power failure long enough to write any data in volatile memory to disk.

Disk Processor Enclosure – DPE (VNX5100/5300/5500 and VNX5200/5400/5600/5800/7600 models) – This is the enclosure that contains the storage processors as well as the vault drives and a few other drives. It contains all connections related to block-level storage protocols, including Fiber Channel and iSCSI.

Storage Processor Enclosure – SPE (VNX5700/7500/8000 models) – This is the enclosure that contains the storage processors on the larger VNX models. It takes the place of the DPE mentioned above.

Storage Processor – SP – Usually followed by “A” or “B” to denote which one it is; all VNX systems have 2 storage processors. It is the job of the storage processor to retrieve data from disk when asked and to write data to disk when asked. It also handles all RAID operations as well as read and write caching. iSCSI and additional Fiber Channel ports are added to the SPs using UltraFlex modules.

UltraFlex I/O Modules – These are basically PCIe cards that have been modified for use in a VNX system. They are fitted into a metal enclosure that is then inserted into the back of the storage processors or data movers, depending on whether it is for block or file use.

Control Station – CS – Normally preceded by “Primary” or “Secondary”, as there is at least one, and most often two, control stations per VNX system. It is the job of the control station to handle management of the file or unified components in a VNX system. Block-only VNX arrays do not utilize a control station; in a unified or file-only system, however, the control stations run Unisphere and pass any and all management traffic to the rest of the array components.

Data Mover Enclosure – Blade Enclosure – This enclosure houses the data movers for file and unified VNX arrays.

Data Movers – X-Blades – DM – Data movers (aka X-Blades) connect to the storage processors over dedicated Fiber Channel cables and provide file (NFS, pNFS, and CIFS) access to clients. Think of a data mover like a Linux system with SCSI drives in it: it takes those drives, formats them with a file system, and presents them out over one or more protocols for client machines to access.
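
To make that concrete, here is roughly what carving out and exporting a file system through a data mover looks like from the control station CLI. This is a sketch with placeholder names; the file system name, pool, size, and client subnet are all made up for illustration:

```
# Create a 100GB file system from an existing storage pool (placeholder pool name)
nas_fs -name fs01 -create size=100G pool=clar_r5_performance

# Create a mountpoint on data mover server_2 and mount the new file system there
server_mountpoint server_2 -create /fs01
server_mount server_2 fs01 /fs01

# Export it over NFS so clients on this (placeholder) subnet get read/write access
server_export server_2 -Protocol nfs -option rw=192.168.1.0/24 /fs01
```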

Disk Array Enclosure – DAE – DAEs come in several flavors, two of which are depicted in the quick reference sheets. The first is a 3U, 15-disk enclosure which holds 15 3.5″ disk drives; the second is a 2U, 25-disk enclosure which holds 25 2.5″ disk drives; and the third is a 4U, 60-disk enclosure which holds 60 3.5″ drives in a pull-out drawer style enclosure. The third type is rarer and is not normally used unless rack space is at a premium.

Maximum drives per System

(Image: table of maximum drive counts for the VNX models)


(Image: table of maximum drive counts for the VNX2 models)


I plan to update this article soon with the latest information, but I have noticed a LOT of traffic to this page, so I thought I would include a quick contact form in case you have more questions or need something clarified. I think I may start doing this on some of my posts, just because I don't get emails when comments are waiting for me to approve or review. So hopefully this will speed up my ability to respond to any questions you may have.

Have a question or need more info? Let me know.

EMC VNX5300: My First Encounter

I have worked with several EMC SANs as well as a slew of SANs from other vendors, but before last week I had not yet gotten the opportunity to mess around with a VNX5300. This SAN is for an upcoming project I will be installing, and since I wanted to make sure my Fiber Channel kung fu was good to go, I went ahead and racked this guy up in our datacenter.

I started racking gear about 5pm, and by 6:45pm I had a virtual machine running off the SAN! In that (almost) two-hour block I was able to rack the SPS, DPE, two DAEs, and an HP DL360 G6 server. This particular SAN has 10 600GB SAS drives and 26 300GB SAS drives in addition to the 4 vault drives. It uses 8Gb Fiber Channel as its fabric and has one 4-port 8Gb FC module on each controller in addition to the 4 onboard ports. I'm also pretty sure there is a V8 engine inside of it somewhere.

Anyhow 🙂

After starting everything up I took a look at the Liebert nFinity 10kVA UPS that it was plugged into, and the load had gone up 15%! Clearly this isn't a Prius… but let's face it… it is faster than a Prius! VMware ESXi 5.0 was already installed on my DL360 host, so the next step was to run the deployment wizard from my laptop.

The wizard literally takes 5 minutes, then the controllers reboot, and then you're in business. Plus these things come with Unisphere, which is light years ahead of the old CLARiiON and Celerra management tools. The next thing I did was add my ESXi host to Unisphere; to do that I put in its IP, and it instantly found which Fiber Channel ports on the SAN the host was plugged into. (I love the vSphere integration these things have.)

I also created two storage pools, one RAID 10 and one RAID 5. I put a stupid number of drives in them because we both know I'm going to run IOMeter 🙂 on this guy later. Out of those two storage pools I created one LUN in each (I had room to create more, but this is just for testing).

All that was left to do was create a storage group so that I could tell the array my host was allowed access to my two LUNs. After doing that I rescanned the Fiber HBA in the DL360, both LUNs showed up, and I was ready to go.
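
For the curious, that same pool / LUN / storage group workflow can be driven with naviseccli instead of the GUI. A sketch under assumptions: the SP address, disk IDs, names, sizes, and host alias are placeholders, so verify the flags against your FLARE release:

```
SP=192.168.1.10   # placeholder SP address; add -user/-password/-scope as your setup requires

# Build a RAID 5 pool from specific disks (bus_enclosure_disk notation, placeholder IDs)
naviseccli -h $SP storagepool -create -name Pool_R5 -rtype r_5 -disks 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9

# Carve a 500GB LUN (LUN number 10 chosen arbitrarily) out of the pool
naviseccli -h $SP lun -create -type nonThin -capacity 500 -sq gb -poolName Pool_R5 -name LUN_10 -l 10

# Create a storage group, connect the (already registered) host, and map the LUN into it
naviseccli -h $SP storagegroup -create -gname ESX_SG
naviseccli -h $SP storagegroup -connecthost -host esx01 -gname ESX_SG
naviseccli -h $SP storagegroup -addhlu -gname ESX_SG -hlu 0 -alu 10
```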

I have set up a VNXe series SAN before, but this was my first VNX, and I must say I think the VNX was easier than the VNXe! LOL. The only thing that took me a couple of minutes to figure out was how to provision the cache memory between write and read, but after I found the area for the setting it was easy.

As I said before, I did run a couple of IOMeter tests on this guy and I must say that it is stupid fast! But I will wait to post those until I have it onsite with all the hosts set up on it and can get some good numbers.

Overall, this SAN looks like the nicest one I've gotten to mess with yet!