Answering Josh’s EMC VNXe Questions

Josh left a great comment for me on the new VNXe Host and LUN setup post, and I felt the questions, and their answers, were important enough for a post of their own. Here they are:

Josh's Comment

Awesome post, but we need more details!

-When/why would someone choose boot from SAN versus either an SD card or mirrored raid ssd?

-Can you compare/contrast the storage capabilities of direct attached fiber channel versus 1Gb Ethernet, 10Gb Ethernet, etc

-I really like this configuration because I think it captures a lot of the small business use cases. Most of the time one host could do the job, but we choose two for fault tolerance. By using direct attached storage (in this case 2 hosts) you don’t have to rely on networking, you don’t have to rely on a FC switch.

-Can you talk more about the new VNXe – can it move data around in the storage pool? Can you have a mix of fast drives and capacity drives and have it shuffle data around?

So here are my answers:

When/why would someone choose boot from SAN versus either an SD card or mirrored raid ssd?

Booting from SAN solves a few problems in my opinion.

  1. It makes things cheaper. On the project I’m working on right now I was able to save about $2k by not purchasing local drives for the ESX hosts. It doesn’t seem like much, but when the SAN and 3 new hosts cost the customer under $40k, $2k is a decent amount.
  2. It’s more reliable, IMO. Don’t get me wrong, I have used USB / SD cards many times, and some of them from my earliest projects are still going. But if I can put a 2GB boot LUN on a SAN, and the SAN is under warranty, there is nothing that is going to cause that host not to boot. If a drive goes bad, just swap it; no host downtime or reload.

Can you compare/contrast the storage capabilities of direct attached fiber channel versus 1Gb Ethernet, 10Gb Ethernet, etc

Sure can. Fiber Channel is STUPID FAST. Sure, 10Gb Ethernet is fast too, but then I would have to configure 10Gb switches or at least a few /30 subnets so that each of the SAN ports would know which host it’s talking to. With direct attach Fiber Channel (or FC-AL in official terms) I just plug in cables… THAT’S LITERALLY IT.

It can also be argued that 8Gbps Fiber Channel is just as fast as 10Gbps iSCSI or FCoE. Plus, the VNXe1600 can now do 16Gbps Fiber Channel… It’s a no brainer for smaller shops to direct connect Fiber Channel.
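
For the bandwidth piece, here is the back-of-the-napkin math I use. These are nominal line-rate numbers under my own assumptions (8b/10b encoding on 8Gb FC, 64b/66b on 10GbE), not benchmark results, and real-world throughput will be lower once protocol overhead gets involved:

# rough usable-bandwidth math, nominal only
awk 'BEGIN {
  printf "8Gb FC  : ~%d MB/s (after 8b/10b encoding)\n", 8000 * 0.8 / 8
  printf "10GbE   : ~%d MB/s (before TCP/iSCSI overhead)\n", 10000 * 64 / 66 / 8
  printf "16Gb FC : ~%d MB/s (commonly quoted figure)\n", 1600
}'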

I really like this configuration because I think it captures a lot of the small business use cases. Most of the time one host could do the job, but we choose two for fault tolerance. By using direct attached storage (in this case 2 hosts) you don’t have to rely on networking, you don’t have to rely on a FC switch.

BINGO! Eliminate two iSCSI switches from an SMB BOM and you just saved $5k… and took two items off warranty and out of the equation for troubleshooting. I’ve been doing this with the HP MSA2000/P2000 as well as the VNXe series for years. It works great and is super reliable. Plus, if a customer ever did need to scale, you could just add a switch later. If you go with the VNXe3200, it has 4 FC ports per controller, which is more than the number of hosts VMware Essentials Plus supports… So I always figured that if a customer can afford Enterprise class VMware licensing, they can afford 2 Fiber Channel switches.

Can you talk more about the new VNXe – can it move data around in the storage pool? Can you have a mix of fast drives and capacity drives and have it shuffle data around?

The VNXe3200 has almost all of the capabilities of its big brother, the VNX series. It can do FAST VP as well as FAST Cache, and drive types as well as RAID types can be mixed in pools. It looks like the VNXe1600 only has FAST Cache support, no FAST VP. But you could still create two pools and manually sort the data. Honestly though, if you just maxed out the FAST Cache and then filled it with high-capacity 10k SAS drives, the configuration would still be cheap enough that you could ignore NL-SAS drives.

Sorry for not going into more detail on the last question, but you would be better off checking the datasheets on those, as I’m just starting to get my hands on the 1600 now.

 

As always, let me know if you have any more questions.

EMC VNXe1600 – Configuring Hosts and LUNs

The VNXe1600 is the block-only version of the VNXe3200. For SMB-sized VMware environments this is the perfect storage array, as it allows the customer to add FAST Cache, it allows them to mix and match drive and RAID types, and it is easily expandable if needed.

Recently I was configuring some boot LUNs and a VMware datastore on a brand new VNXe1600 and thought I would share the process. It’s pretty quick and very easy to do, especially if you are doing direct attached Fiber Channel servers (remember, this guy only has 2 CNA ports per SP, so only two servers can be direct attached without a switch).

This article doesn’t show the complete storage setup and assumes that storage pools have already been created. If you have not already created storage pools, do that first.

The first thing we need to do is make sure that the server we want to present storage to is configured as a host on the system. If you browse to the Initiators page under Hosts you will see which initiators are registered (green check marks) as well as which ones still need to be registered (yellow exclamation marks). To register a host, go back up to the Hosts page and select Hosts, then run the wizard to add a new host.

If you are configuring a new ESXi host to boot from SAN you cannot use the VMware host wizard to discover the hosts, as vCenter will not yet know about them. So you need to use the generic host wizard to add them. Later on you can add vCenter to the SAN and it will find the ESX hosts just fine.
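
If you need the WWNs off the ESXi hosts to figure out which unregistered initiators belong to which box, you can also pull them from the ESXi shell instead of digging through the vSphere client. A minimal sketch, assuming ESXi 5.x or newer with SSH enabled:

# FC HBAs show their node/port WWNs as part of the adapter UID
esxcli storage core adapter list

# on newer builds this gives a cleaner FC-only view (port WWN, node WWN, link state)
esxcli storage san fc list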

Adding a new host


The first step in the Host Wizard is to give the host a name. This is just a friendly name so you know what you’re working with; it doesn’t need to be the FQDN.


Next, select the operating system that most closely matches this host. (Again, if you are booting from SAN and this is an ESXi host, ignore the warning to use the Find ESX Hosts option.)


Enter the IP address or hostname.


If it’s an iSCSI host you will need to add the IQN information. For my Fiber Channel hosts I skip that step and then select the two initiators that relate to this host.
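
On the iSCSI side, the quickest way I know to grab a host’s IQN from the ESXi shell is below. Just a sketch, and it assumes the software iSCSI adapter is already enabled (the vmhba number will vary):

# the software iSCSI adapter's UID column is its IQN
esxcli iscsi adapter list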


Then on the summary page I click Finish.


The wizard will then report its progress as it completes the steps.


Creating a LUN for VMware

The VMware datastore wizard can be found under the Storage page. Click VMware and then Add to start the wizard.

 


Give the datastore a friendly name; make it the same as you would inside of vCenter for simplicity and easy troubleshooting down the road.


Then pick which storage pool you want to get the space from, and specify a size and whether it should be thin provisioned.


If you do not need snapshots because you are protecting your data some other way, turn them off to save storage.


Then tell the VNXe which hosts should be able to access the LUN. In most cases all your VMware hosts will need LUN access to the datastores.


Click finish on the summary page.


Note that for boot LUNs you will want to allow access only to the host that boots from it. Each host should have its own boot LUN.
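
A quick sanity check I like to run from each ESXi host once access is set up: rescan and make sure the host only sees the LUNs it is supposed to. Treat this as a sketch; VNX/VNXe LUNs have always shown up with the DGC vendor string for me, but verify that on your own system before trusting the grep:

# rescan every adapter, then list just the EMC (DGC) devices this host can see
esxcli storage core adapter rescan --all
esxcli storage core device list | grep -i dgc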


EMC releases VNXe1600 – Block Only VNXe3200

It looks like there is a new member of the VNXe family, a block only version of the VNXe3200. There are also some hardware differences too:

The VNXe3200 has 4 copper RJ-45 ports (per SP) that can be 10Gb iSCSI or 10Gb File (NFS / SMB); the VNXe1600 has two CNAs (10Gb iSCSI or 4/8/16Gb Fiber Channel) per SP. If you want 1Gbps Ethernet you would need to purchase the 4-port eSLIC shown below in green.

VNXe1600 rear

Here is some more detailed information on the VNXe1600 via EMC marketing material:

Spec Sheet

Data Sheet

My take

I think this will make a great addition to the lineup. While the VNXe3200 was cheap… it wasn’t cheap enough in some cases. In fact, I was doing some research for a small company a couple of weeks ago where the VNXe3200 would just barely fit in their budget; the VNXe1600 will be perfect for them, and some sites are saying usable configurations as low as $10k.

Right now a VNXe3200 with just a couple drives and some FAST Cache would run around $20k… so if all you need are the block features to hook to your VMware cluster, then dropping that by half will be pretty awesome. Up until now that has been an area that only Dell and HP have played in with their MD and MSA lines.

Unfortunately, with my move to the vendor side and no longer working for a reseller, it may be some time before I get to check one of these out first hand.

Direct Attach Fiber Channel with the EMC VNXe3200

The demo box that I have from EMC does not have the Fiber Channel mezzanine card in it, but last week I did get a chance to configure a VNXe3200 with direct attached Fiber Channel hosts for the first time (customer install). I must say that the process was stupid simple.

Unfortunately I was not smart enough to grab screenshots during the install, but I will try to explain it as best I can without them.

Overall the installation couldn’t have been easier. I plugged each VMware host into each controller, powered up the SAN, initialized it, and provisioned my storage pools just like normal. Then I powered on the VMware hosts and made sure they could see the VNXe’s “0-byte” LUN. Once I saw that, I knew I was in pretty good shape. I double checked the Initiators tab in Unisphere and sure enough it saw each of the WWNs from the Fiber Channel cards in the hosts.

After creating some VMware datastores in Unisphere, it allowed me to give access to each of the VMware hosts, the same as it would if they had been iSCSI attached.

Overall the whole installation took about 4 hours from the time I started unboxing the SAN until I was migrating data from the existing Dell MD3000 SAS-attached array to the new VNXe3200 Fiber Channel attached array. As far as performance goes, it was being limited by the Dell MD3000, but we were seeing as much as 200-300MB/s.

Definitely a great experience installing this config, and I look forward to doing it a bunch in the future!

My roadmap for the VNXe series

Disclaimer: I don’t work for EMC and I don’t have any inside information that any other customer or partner wouldn’t have. I also have no more influence over the product direction than any customer or partner would. The thoughts in this post are just my opinions.

Every now and then I get a little arrogant and do a post like this where I take my best stab at what I would do if I were the product manager/Chief Architect of a particular product. Since I’ve worked with the VNXe’s since they first hit the market, and since I’m working on other articles while I have one of the new VNXe3200’s I thought what the hell, let’s do a fictional roadmap of the VNXe series, as well as EMC storage in general.

What the VNXe3200 seems to be

The 3200 is clearly a new chapter in EMC’s book, in more ways than one. First off, all previous VNX / CLARiiON / VNXe systems have had a copy of MS Windows on them for one reason or another (yes, even the VNX2’s that just came out last year). The VNXe3200, however, does not; instead it runs the MCx code in user space above a Linux kernel. If proven, this could be a big step in avoiding royalty fees, as well as simplifying the architecture.

Secondly, the VNXe3200 is the first storage unit with the back end of a VNX (meaning native block protocols) and a file side which does not require physically separate data movers or control stations… i.e. it is “truly” unified, both block and file in the same sheet metal box. So I guess certain competitors are going to have to find something new to say about EMC.

Lastly, because it is running MCx code just like its big brothers the VNX2’s, EMC has yet again simplified their development responsibilities (remember last year when they merged the VNX and VMAX dev teams? http://www.theregister.co.uk/2013/11/25/emc_reorgs/), so it’s not hard to see that a common code base is developing and could possibly be used across all array platforms, much like NetApp uses its ONTAP operating environment.

Overall this theoretically means that there really isn’t anything that a VNX can do that a VNXe cannot do. (more on this in the next section)

So what am I getting at?

Well, in my opinion, the VNXe3200 is a test bed for what is to come. It is a ridiculously powerful platform that is running enterprise grade code, just with certain features turned off or hidden. If it works well, and EMC can prove that they don’t need external data movers and external control stations, there is no reason why this architecture could not be rolled up into the VNX series. In fact, I already told them that I think they should have called the VNXe3200 the VNX3200.

After talking with one of the guys at EMC about that name change, he explained that it did actually cross their mind, but when they compared what they were going to allow the VNXe3200 to do versus what their VNX systems are allowed to do, it just made more sense to leave the “e”.

So naturally my follow-up questions were “Well, why disable all of these advanced features?” and “Why not allow me to go check a box somewhere in Unisphere that turns on a full-on “VNX style” version of Unisphere… after all it’s the same MCx code, right?” I obviously didn’t get an answer to either of those questions… it’s almost like they assume us bloggers are like “real” media or something, because product engineering and marketing guys always clam up when you start asking the good questions 🙂 LOL.

Let’s put another spin on it…

Why not sell me an “advanced features” license for my “VNXe”3200 that turns it into a “VNX”3200?

Talk about software defined!

In fact, you could even use that model in the VNX series too: make it so that the VNX5x00 series systems would only allow certain RAID configs, file or block protocols, and certain settings to be customized; then if you have a SAN administrator, or need a consultant to configure some crazy settings, you can add the advanced features license or the file services license. Now, before you start throwing things at your monitor because I’m mentioning more licensing, keep in mind that they wouldn’t necessarily need to charge more for this license… but if you don’t need it then maybe the price would go down? Just food for thought, mainly.

Now to really blow your mind (maybe), and prove my point from above…

The VNX5200 Unified array has an Intel E5-2600 series quad-core processor (each core runs at 1.2GHz) and 16GB of RAM in each storage processor. This is what powers the block side of the array. On the file side of the house, each data mover has an Intel 5600 series proc and 6GB of RAM (core count not specified in this doc). So if you have two data movers you have a total of 12GB of RAM there, and system wide you have a total of 44GB of RAM (32GB block side, 12GB file side) and at least 10 cores… maybe 12 at the most. This is what powers BOTH block and file.

Enter the VNXe3200.

Each SP has 24GB of RAM and a 2.2GHz Sandy Bridge quad-core proc, for a total of 48GB of RAM and 8 cores. And if you pay attention to Intel’s marketing at all, you know that Sandy Bridge is supposed to just kick the snot out of the 5600 series. So theoretically you have just as much horsepower in the VNXe3200 as you do in the VNX5200 (and if you add up the GHz, you actually have MORE in the VNXe3200 than the VNX5200). The main difference is that you only get one expansion option on the VNXe, whereas on the VNX5200 you have multiple SLIC modules to expand its I/O capabilities.
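
Just to show the napkin math behind that GHz claim, using only the numbers quoted above (the file-side data mover clock speed isn’t listed in the doc, so it is left out):

# block-side GHz totals from the spec sheet numbers above
awk 'BEGIN {
  printf "VNX5200 block side: %.1f GHz total (2 SPs x 4 cores x 1.2GHz)\n", 2 * 4 * 1.2
  printf "VNXe3200          : %.1f GHz total (2 SPs x 4 cores x 2.2GHz)\n", 2 * 4 * 2.2
}'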

Take away

If we were at a bar and I was explaining this, we would certainly be several beers deep at this point… so please remember I am just a rambling idiot with a web server and a blog. I have no access to inside information about EMC’s roadmaps… so all of this is probably way off! BUT if somehow the stars align and my crystal ball proves right, you can say you read it here first. Plus, how awesome would it be to buy a VNX at a VNXe price point?

VNXe3200 Managing Storage and Cache

The VNXe3200 is almost identical to both the previous VNXe’s and the VNX series of SANs in that it uses Unisphere for management. However, the version of Unisphere running on the 3200 is very different once you take a closer look.

After you navigate away from the dashboard (which is pretty much the same as the older VNXe’s) the changes start to become evident.

The purpose of this post is just to look at managing the disks in the system. There are MANY menus and settings to look at on the 3200, just like any SAN, so the only way to make it manageable to talk about is to break it up. To find articles on other sections, use the EMC -> VNXe menu at the top of my blog under the logo, or check the related posts section at the bottom of this article.

Storage Tab

On the Storage tab there are only 4 sections; however, this is probably the most powerful tab you will use if you’re doing VMware block storage.

VNXe3200 Storage Tab

Storage Configuration is where you will want to start. This section is where you will configure FAST Cache and storage pools, and see which drives are spares.

Storage Configuration SubTab

FAST Cache configuration is very easy: simply click the Create button and select which drives you want to use. After creating your FAST Cache you can also click on the Storage Pools icon and configure the disks in your system into pools.

The main part of the Storage Pools tab is pretty similar to the older VNXe’s; the awesome stuff is in the back-end capabilities of what the storage pools can do on the 3200.

General storage pool details

From the general details tab you can see the mix of drives in the VNXe’s storage pool. This demo unit was configured with a mix of 15k SAS drives and SSDs, so in this pool I have a 4+1 RAID 5 configured on the 600GB drives and a RAID 1 on 2 of the 100GB flash drives.
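
For what it’s worth, the napkin math on that pool layout looks like this (ignoring hot spares and the usual raw-versus-formatted overhead):

# usable space per tier in this demo pool
awk 'BEGIN {
  printf "SAS tier  : %d GB usable (4+1 RAID 5 on 600GB drives)\n", 4 * 600
  printf "Flash tier: %d GB usable (RAID 1 pair of 100GB drives)\n", 100
}'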

Storage pool FAST VP details

One of the things that I always like to keep an eye on is how much data moves around on the different tiers in a production environment. This helps you know when you need to add more to one tier, or which tiers are working harder than the others.

Next, if we move back up a level to the Storage tab, we can take a look at how you actually allocate storage to VMware. This assumes you have already configured your iSCSI or Fiber Channel settings.

VMware Datastores

Even though there is a LUN section, VMware datastores should be configured through the VMware section. The main advantage is simplicity and automation: doing it through this area will automatically present the LUN to your VMware servers after creating it, and it also formats the LUN and makes it ready to use.

VMware Datastores section

If you go into the details of one of the datastores you can get a lot of great information. First, you can see which SP owns the LUN, as well as change its default ownership, something you could not do on the older VNXe’s.

You can also check out the FAST VP tab and see where the data is in the pool, as well as change the default tiering methods.

VMware datastore FAST VP details

Other things like host access, snapshots and a list of the virtual machines leveraging this datastore are all available as well.

File Systems

From the Storage tab, if you select File Systems you will be able to create file systems for use by CIFS and NFS servers. Setup is pretty much the same as on the previous VNXe systems. One thing that I haven’t been able to determine yet is whether you can share a file system via CIFS AND NFS at the same time. On previous VNXe systems you could not, but it would be pretty nice if you could on the 3200… I will look into that more and follow up, as I plan to do a post dedicated to file services on the 3200.

 

VNXe 3200 Controller Failover Behavior

Two years ago I posted an article describing how the VNXe3100 (and 3300 and 3150) had a major drawback in the way that it handled IO during a controller reboot or failure. If you didn’t get a chance to experience this for yourself, head over to my older article and take a quick look: http://www.jpaul.me/2012/07/vnxe-sp-failover-compared-to-other-solutions/

Now that you know what the issue was, let’s talk about the new VNXe3200. I received a demo box from the folks at EMC through Chad Sakac’s blog a couple of weeks ago and have finally gotten a chance to play around with it and re-do the same test that I did 2 years ago to the 3200’s predecessor. The results were awesome, but before I go into the details let me just say that I no longer dread when I have to propose a VNXe. I now know first hand that this box is up to the task and should have no problems living up to the reputation of its big brothers (the VNX series).

Why is the 3200 different

What’s new with the VNXe3200

So the VNXe3200 borrows the heart of the VNX, its MCx code, and uses that code to provide native iSCSI (not emulated like all previous VNXe’s) as well as Fiber Channel connectivity… Simply put, this thing has big boy block protocols. Because of this, there are services running on both SPs at the same time, and there is no service “reboot” time like the older systems needed when restarting their iSCSI servers.

I’m not going to go into all the other new awesomeness, as there will be lots of other posts about this box on the way, so let’s get into what happened during the failover tests.

Background Info

From the VNXe3200 I have presented two LUNs to my VMware ESXi servers. The virtual machine I will be using to test with (SQLAO1) is on a LUN named “FAST_Pool_02” (and yes, the VNXe3200 also has Fully Automated Storage Tiering, just like the VNX series). Here is a screenshot showing that SPA is the owner of this LUN; again, very different from the previous VNXe’s. On those models, LUNs were owned by an iSCSI server, not a storage processor.

SPA is the owner of LUN “FAST_Pool_02”

So to do the test I decided I would copy 16.2GB of linux ISO files from my VNXe3100 CIFS share to this VM.

Baseline

The first time that I transferred the 16.2GB of ISO files I was getting between 80-90MB/s as reported by windows.

Baseline Transfer Speed

Bouncing SPA

The next thing to do was to kill SPA. To do that, I decided the garage was too far to walk to, so I used the SP Reboot feature in the Service System menu.

Rebooting SPA during second transfer

By the time I rebooted the SP, the transfer was already running a second time; in fact it was probably halfway through. When I did that, Windows kept right on copying data, buffering it for a short time. I attribute this buffering time to VMware and its native multipathing policies, because within a few seconds the buffer started flushing out just as fast as it built up. On the VMware disk performance graph for the SQLAO1 VM, you can see that traffic spikes down when the paths switch over, but then immediately spikes back up as it starts to use the other paths through SPB.

Short buffering and flushing during SP reboot
Baseline Transfer in RED, Transfer during SP Reboot in Green, Point of Reboot is pointed to in Blue

As you can see, the transfer is being limited by my 1Gbps network, because right after the path switchover, VMware dumps all of the data that had been buffering for the last 10-ish seconds to the SAN and is able to spike up to 150MBps because of multipathing (BTW, I just have 2 x 1Gbps links and IOPS Limit = 1 on the ESXi host).
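
For reference, that IOPS Limit = 1 tweak is set per device from the ESXi shell, roughly like this. Treat it as a sketch: the naa ID below is a made-up placeholder, and you should confirm EMC’s current best-practice values for your array and ESXi version before copying it blindly:

# make sure the LUN is on Round Robin, then switch paths after every single I/O
esxcli storage nmp device set --device naa.600601xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device naa.600601xxxxxxxxxxxxxxxx --type iops --iops 1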

The total time from when I clicked reboot until the paths were back online and active was about 9 minutes. During that time I grabbed a screenshot of the paths list in VMware, showing that the b0 and b1 ports were in use.

Path list during reboot, showing SPB as owner

After the reboot was completed I checked the paths again and it had automatically failed back to SPA as the owning controller.
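
If you would rather watch the failover and failback from the command line than from the vSphere client path screen, the same information is available in the ESXi shell (again, the device ID is just a placeholder):

# show every path to the LUN and which path group is currently active
esxcli storage nmp path list --device naa.600601xxxxxxxxxxxxxxxx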

Path list after reboot showing SPA as owner again

 

Takeaway

I’ve only had the box operational a week, maybe two at the most, and already I am certainly impressed. EMC has fixed a lot of the major issues that I had with the older VNXe series. In fact, I’m not entirely convinced that they should even call this box a VNXe… but that is another post coming soon.

I see this box being a great addition to my consulting tool kit. With the VNX5100 going away at some point, there was a small hole left for customers that needed enterprise SAN functionality in a cost-effective platform. However, with the addition of FAST VP and FAST Cache to the VNXe3200, as well as Fiber Channel and native iSCSI, it’s not hard to figure out why they picked the code name KittyHawk… because this thing does fly compared to its older brothers.

Stay tuned, more VNXe3200 articles coming as I have time to get them posted.

Disclaimer: EMC has provided a VNXe3200 Demo unit for me to perform these tests with. They however have encouraged everyone in the demo program to post the good the bad and the ugly all the same. I am not being compensated for this article or any of the other EMC articles posted on my site. In fact it actually costs me money because I haven’t figured out how to get the power company to sponsor my lab yet 🙂

EMC VNXe 3200; new firmware enables up to 150 drives

On Thursday EMC quietly released a firmware update for the VNXe3200 which allows the system to recognize up to 150 drives. The version number is 3.0.1.3513260, and it was released July 31, 2014.

VNXe3200 Code to support 150 drives

Before this release the VNXe3200 was limited to just two drive shelves (or 50 drives); however, with this release customers can now order and install up to 5 more 25-bay DAEs or 11 more 12-bay DAEs (or any combination of them, up to 150 drives).
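
The shelf math works out like this, assuming the DPE itself holds 25 drives (which matches the two-shelf / 50-drive limit mentioned above):

# 25-slot DPE plus five more 25-slot DAEs hits the new cap
awk 'BEGIN { printf "25 + (5 x 25) = %d drives\n", 25 + 5 * 25 }'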

The maximum raw capacity is now 500TB, and support for 900GB, 2TB and 200GB FAST VP drives has also been added.

That is all… short and sweet, but a very important update!

 

Stats Logging on an EMC VNX SAN

From time to time I get asked how to turn on the statistics logging functions on a VNX array so that a customer can see what is going on inside of the box. To go along with that same request I have also included the instructions on how to pull those stats from the array so you can send them to your EMC team or partner.

Turning stats logging on

  1. After logging into the array, click on the system’s serial number in the upper left hand area of Unisphere.
  2. Next click on the “System” button in the upper ribbon.
  3. On the System menu select “Monitoring and Alerts”
  4. On the Monitoring and Alerts menu select “Statistics”
  5. Finally in the Settings area on the Statistics page select “Performance Data Logging”. It is the top option on the left side.
  6. A box will popup with several options on it. You will want to make sure that “Periodic Archiving” is checked,  and check the box next to “Stop Automatically After” (and normally I leave this at 7 days, but adjust as needed). Finally make sure to click “Start” before closing this box.
  7. After starting stats, just sit back and wait for the time period you selected to go by; after it has passed you can return to the SAN, retrieve the stats archives, and send them to your EMC partner or EMC SE as needed.

Retrieving stats archives

To collect the stats from the array follow steps 1 – 4 from the previous section, then proceed with the following steps:

  1. Once you are on the Statistics page, click on “Retrieve Archive” from the Archive Management panel. (4th option down on the right side)
  2. A box will pop up that looks much like a basic file manager. In the “Archives on SP” box you will find the archives of the stats. Select all of the relevant archive files, which will end in .NAR or .NAZ, then select a Save As location, and finally click Retrieve. Make sure to also switch over to SPB and collect the NAR/NAZ files from that storage processor too!
  3. After clicking retrieve the stats archives will download to your local PC. From here you can select “Done” on the Retrieve Archives box and then create a ZIP file from all of the NAR/NAZ files. They are now ready to go to your EMC Partner or SE Team.

The only other thing required to generate the pretty graphs and heat maps is an SP Collect from one of the SP’s in the system. Typically EMC will have you upload all of this data to their FTP server (they will provide an account). Or if you are working with me I will either webex in and upload the files directly from your machine, or have you send them to my FTP server.
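
If you prefer the CLI over Unisphere, naviseccli’s analyzer commands can do most of the same things. I am writing these from memory, so treat the exact flags as assumptions and double check them against naviseccli analyzer -help (or the CLI reference for your FLARE release) before relying on them:

# start/stop/check stats logging (flags from memory, verify before use)
naviseccli -h <SPA_IP> analyzer -status
naviseccli -h <SPA_IP> analyzer -start
naviseccli -h <SPA_IP> analyzer -stop

# list the archives on the SP and dump one to CSV for analysis
naviseccli -h <SPA_IP> analyzer -archive -list
naviseccli -h <SPA_IP> analyzer -archivedump -data <file.nar> -out <file.csv>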

 

Upgrading unified VNX freezes; leaves Unisphere broken

Last week I was upgrading a unified VNX5300 to FLARE 32, since FLARE 31 will be end of support soon, and ran into some issues. This is the first time I had ever had this happen to me, but apparently it’s a known issue.

When running the upgrade from Unisphere Service Manager, all of the prep work goes fine and you can start the upgrade. However, about 10-15 minutes into the upgrade you will notice that the progress bar stops progressing. Another sign that you have run into this known bug: SSH into the primary control station and issue /nas/sbin/getreason; you will see that the secondary control station is powered off, and you will also have noticed a “This system is being upgraded…. get out” type warning when you logged in.

I gave the system about 3 hours (I was off doing other things) before I called support. The fix isn’t pretty, but it does work.

What you will need to do is get the upgrade DVD from support.emc.com and put it into the primary control station.

Next, log in to CS0 as root.

Then run the following:

mount -r /media/cdrom                    # mount the upgrade DVD read-only

mount /dev/hdc /celerra/upgrade          # mount the DVD device at the upgrade staging path

cd /celerra/upgrade/EMC/nas              # change into the NAS installer directory on the DVD

./install_mgr -m upgrade                 # kick off the NAS portion of the upgrade from the DVD

This will kick the NAS portion of the upgrade off using the DVD as the source for the update. After it has completed, Unisphere will be back online and you can proceed to upgrade the block side of the array with USM, this time without issue.