Ubiquiti’s New Unifi Elite Offering

It looks like I’m a little late to the party on Ubiquiti’s latest (beta) announcement for the Unifi series, but nonetheless we are going to take a look at what Ubiquiti calls Unifi Elite. Ubiquiti made the announcement mid-December on its community portal here. It looks like beta access is public and available to anyone with a Ubnt.com account.

(Side note: To the best of my knowledge, none of this information is under NDA, but I do not get press briefings or release dates from Ubiquiti. So, if any of this is not supposed to be public yet, please let me know. I’ve asked several people if I can get added to whatever “press/blogger” info briefings Ubiquiti does, assuming they do something like that, but have never heard back.)

What is Unifi Elite?

From what I have read in the beta announcements and what I’ve seen by using the beta controller, Unifi Elite looks like it will be a combination of two new offerings. The first part of the offering, which is in beta now, is a cloud version of their Unifi Controller. The second part is a more enterprise-friendly service and support offering.

With these new additions, it appears that Ubiquiti is looking to start putting the “enterprise stuff” into their enterprise WiFi solution. Don’t get me wrong; there are already a lot of geeky enterprise features in the offering today, but there is more to enterprise IT than product features.

Let’s take a look at why I think this announcement will bring Ubiquiti Unifi to a much larger market and increase deployment sizes as well.

Unifi Elite – Cloud Controller

The heart of the Unifi product line is the Unifi Controller. You have several ways to run a Unifi Controller, but until now all of your options were self-hosted variants:

  • Windows or Linux virtual or physical machine
  • Ubiquiti Cloud Key
  • Raspberry Pi-based
  • Mac application
  • AWS or other public cloud instance

In all of these controller types, the Unifi code is distributed as an application that you install on a host operating system. You then have to maintain both the Unifi updates as well as the updates to the host operating system; yet another task for the enterprise sysadmin.

With the Unifi Elite controller, you get the same features and functionality as what you can download and run locally, but the main difference is SaaS delivery; Ubiquiti handles hosting, support, maintenance, and troubleshooting.

Maybe this explains why my Ubiquiti Unifi Virtual appliance hasn’t received any “official” love or shout outs?


I get it; supporting a product is a huge undertaking (I work for a vendor myself). By offering enterprise-quality support to customers using the Elite Cloud Controller, Ubiquiti will remove a lot of the variables that customer environments introduce and make supporting Unifi a little easier.

Reasons I would consider the Elite Cloud Controller

I found the forum posts about the Unifi Elite Controller while searching for any details I could find on running a controller on AWS. I am currently preparing that blog post, but I almost deleted the whole thing. Why? Well, even the smallest AWS instance, a t2.micro, will cost about $20 a month to run after your free-tier access runs out. And if you have worked with t2.micro instances before, you already know that they aren’t blazing fast, and a sizable Unifi deployment might require a larger instance type. From what I have seen on the Unifi Elite beta posts, a stand-alone controller will be $45/month after the beta period. So for not too much more, you have a controller with no documented “maximum devices,” plus Ubiquiti will maintain it! Can’t beat that!

UPDATE: After Ubiquiti took the Elite service GA, I learned that each device you connect to the cloud controller also needs to have a maintenance package associated with it. I talk about that service in the next section of this article; however, this means that you can’t just use the cloud controller, you also have to pay for device maintenance, which means this solution will get quite expensive.

Another reason I would consider running a Unifi controller in the cloud is if I had a large deployment over multiple sites. With the Google Chrome Unifi app you can tell your new APs where to look for a controller, so deploying APs and switches is pretty simple.

  1. Plug the device in (let it boot up and get an IP)
  2. Use your Chrome-enabled device to run the Unifi app
  3. Specify the controller hostname so the Unifi device can call home to it
  4. Adopt the device on the Unifi controller

Pretty easy; it’s basically one extra step from running a controller onsite, and Ubiquiti has said that direct cloud adoption is coming. Can you say Cisco Meraki clone? For a tenth of the price…
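If clicking through the Chrome app gets tedious, the same thing can also be done over SSH (a sketch, assuming factory-default `ubnt/ubnt` credentials; the IP and controller hostname below are placeholders for your environment):

```
# SSH to the freshly booted device using the IP it pulled from DHCP
ssh ubnt@192.168.1.20

# On the device, point it at your controller's inform URL
set-inform http://your-controller.example.com:8080/inform
```

On some firmware versions you need to run `set-inform` a second time after the controller starts the adoption, so the setting persists.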

Lastly, if $45/month is too much for you to spend on a Unifi Elite Controller, there is a way to get it for free.

Unifi Elite – Premium Service

The second part of the Ubiquiti Unifi Elite offering is what I call Premium Service and Support. This is exactly what is required to get Unifi into the enterprise. From what I’ve read, this will be an annual maintenance contract per device that you would pay Ubiquiti. They would then provide you with faster RMA service, priority support, and warranty.

Here is a quote from UBNT-Brandon:

It will also encompass cloud, upgraded RMA, and upgraded support as well.

For Beta it is cloud only, as we march towards stable release, we will be adding these as well. So you can think of it as ‘UniFi Elite’ services – Cloud, Support, and Warranty.

So essentially it upgrades your consumer-like warranty and support to something (we don’t know exactly what yet) that looks more like what enterprises are used to.

UPDATE: as stated above, all devices that leverage a cloud controller will have to have a premium service maintenance contract. Device pricing varies, and it can be found on the Ubiquiti website.

Setting up a Unifi Elite Controller

To get started, you log in to https://unifi.ubnt.com. Right now you will also need to be a beta user (which you can opt in for in your profile).

Once logged in you should see a “Setup Unifi Elite” button at the top of the page. I would explain the setup wizard that it launches, but honestly, it just asks for your credit card info. Saying it’s easy is an understatement, but when it’s done you will see your new Unifi Controller show up in the inventory.

Ubiquiti Unifi Elite
Unifi Cloud Management showing both the Elite SaaS controller, as well as a traditional “onsite” software controller

Management of the Unifi Elite cloud controller is identical to a regular controller. You log into it and immediately you are prompted to run through the initial setup wizard, just like an on-premises controller.

Provisioning devices

Originally, to make a Unifi device work you needed a controller onsite. Unifi devices look on the local network for a controller at a locally significant DNS hostname (http://unifi:8080 by default) to see if one exists. This is why, when you log in to a local controller after plugging in a new device, it will show up and be ready for adoption.
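For the onsite case, that lookup is easy to satisfy by publishing the `unifi` name in your internal DNS. A sketch as a BIND-style zone record (the zone and target hostname below are placeholders for your network):

```
; Point the default "unifi" name at the host running your controller
; so new devices find it on their own at boot.
unifi      IN  CNAME  controller.example.lan.
```

Devices that can resolve `unifi` will then try http://unifi:8080/inform themselves, with no extra tooling needed.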

The problem is that there is no way (right now) for a Unifi device to inform (that is the UBNT term for telling a controller a new device is online) a controller in the cloud. So Ubiquiti has created a Chrome application that lets you modify the default inform address and set any IP or name that you want. So for now, if you want to start using a controller in the cloud (or just at a different site without DNS), this is the process to get the AP or switch to show up in the controller.

First, download the Chrome app, it’s called “Ubiquiti Device Discovery Tool.” When you run the tool you will see something like

UBNT adoption app
Chrome discovery app, with a new AP detected.

From here I can click the “Action” button and configure the AP to “inform” whatever controller I want it to. This is where you will enter your Elite Cloud controller’s hostname.

Enter your Unifi Elite hostname


Once you set this URL, the device will talk to the controller you specify, and from that portal you can go ahead and adopt it and provision it just like normal.


I love to see the innovation that Ubiquiti continues to bring to market. Sure there are solutions that already accomplish all of this stuff, but not at the price point that Ubiquiti does. Obviously, the market thinks the same way I do because Ubiquiti stock almost doubled in price in 2016 and seems to be starting off pretty strong in 2017 as well.

As for the product, I like the idea of Unifi Elite; I think that if Ubiquiti starts to develop a channel partner program, you would see quite the uptake for this type of product and service in the SMB to commercial space. I think that most VARs (aka channel partners) could also develop an MSP model around the Unifi Elite services and take the deployment and day-to-day management even further for the customers that want that hands-off, white-glove treatment.

Personally, I have enough Raspberry Pis and VMware servers sitting around that I cannot see myself using it past the beta period. However, if I had to pay for a Cloud Key, or fire up an Amazon instance to run a Unifi controller… well, then I would certainly consider using the Elite service. But at $45/month, I doubt that there will be much uptake in the home office user space.



Anatomy of the Cisco UCS Mini

This anatomy of the Cisco UCS Mini post is a little different than the EMC VNX series anatomy post. Instead of using Microsoft Visio to create PDF and JPEG files, I decided to finally put some time into Visme.co, which, as it turns out, is a pretty excellent tool for creating infographics.

So if you are looking for a quick reference sheet on the Cisco UCS Mini system, then you are in the right place.

Click here to check out the infographic.


There is also a jpeg version here.

See below for a PDF.

Big thanks to the folks at Visme.co

They were gracious enough to provide me with a premium account while I mess around with their site. Because of their generosity, I was able to create a PDF version which you can download here.

If you’re in the habit of creating presentations, infographics, or anything like that you should check out my review, then head over and look at their offering. http://visme.co

Additional Details

In the infographic, I outline some power requirements in the UCS chassis; for more information, check out the spec guide that I pulled from the UCS spec page. Specifically, check pages 19 and 20.

I also received a couple of messages from Bill Shields to help describe what blades are supported as well as a little more info on the scalability connector.

“The scalability connector supports a 2nd chassis, C-Series, and 3rd party storage arrays like Nimble.” – So what is nice about this is that if you want to do 10Gb iSCSI but don’t have 10Gb switches, you can use a QSFP breakout cable and plug it right into a Nimble (or similar) array.

“… I noticed you called out the B22 M3 which is EoS. Don’t know if that is worth footnoting. We also support the M3 version of all the other servers listed. When the next update to UCSM comes out (real soon now), we will be adding the B260 M4.” – The takeaway here is that blades change often. For the most up-to-date list, give your reseller a call or check cisco.com.


If you think there is something I should add or correct, please let me know! Thanks for reading.

Have a question you don’t want to share? Just shoot it over, and I’ll email you as soon as possible.

Ubiquiti Unifi Virtual Appliance

I have some Unifi wireless APs at my house and was trying to find a virtual appliance version of the Unifi controller, but was unable to. So I went ahead and created one myself. You are welcome to use it, but it does not come with any support or warranty from me. 🙂 It is simply a minimal Ubuntu 16.04 LTS install along with the proper packages to run the Unifi 5.0.7 controller software. The Unifi Controller software is pre-installed, so it will boot up and Unifi will be started automatically!

Before you see the dashboard like the screenshot below, you will need to walk through the initial config because this appliance has a fresh install of the controller software. If you plan to import a configuration file from an existing controller, I would not adopt any APs during the initial config, nor would I configure any SSIDs; those will be imported automatically when you restore the config.


When you fire it up, the credentials are ‘unifi/unifi’, and if you want root access you can sudo with the same password.

By default, it will try to pull DHCP from whatever virtual network it is attached to, but you are welcome to use the normal Ubuntu “interfaces” file to set a static IP.
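A minimal static-IP stanza for that file (a sketch; the interface name and addresses below are placeholders for your network — edit `/etc/network/interfaces` as root, then restart networking or reboot):

```
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```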

My deployment

I deployed this appliance for myself and was able to successfully import a backup of the config from my Windows-based controller without any issues. The coolest part was that all I had to do to migrate my APs to this new controller was shut down the old controller and import the config to this one! That’s AWESOME!


Now I just need to get some Unifi switches and a router to complete the Unifi Puzzle!

Looking for the UniFi Hardware?

If you haven’t completed your Ubiquiti Unifi hardware deployment, Amazon has great prices on all the UniFi hardware.

UniFi Security Gateway Unifi PoE Switch Unifi Wireless Access Point

OVF Download

I’ll try to keep this up to date as I update my controller with major releases. Please note that automatic Ubuntu security updates are not enabled on this appliance, so I would highly recommend installing those occasionally.
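If you would rather automate that, the stock Ubuntu 16.04 tooling can be enabled on the appliance (standard Ubuntu commands, not something baked into the appliance; run via sudo from the `unifi` account):

```
sudo apt-get update
sudo apt-get install -y unattended-upgrades
# Writes /etc/apt/apt.conf.d/20auto-upgrades to enable the daily security run
sudo dpkg-reconfigure -plow unattended-upgrades
```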

Unifi 5.0.7 – Ubuntu 16.04

Username: unifi Password: unifi

Download Size: 948MB



Answering Josh’s EMC VNXe Questions

Josh left a great comment for me on the new VNXe Host and LUN setup post, I felt the questions, and their answers, were important enough for a post of their own. Here they are:

Josh's Comment

Awesome post, but we need more details!

-When/why would someone choose boot from SAN versus either an SD card or mirrored raid ssd?

-Can you compare/contrast the storage capabilities of direct attached fiber channel versus 1Gb Ethernet, 10Gb Ethernet, etc

-I really like this configuration because I think it captures a lot of the small business use cases. Most of the time one host could do the job, but we choose two for fault tolerance. By using direct attached storage (in this case 2 hosts) you don’t have to rely on networking, you don’t have to rely on a FC switch.

-Can you talk more about the new VNXe – can it move data around in the storage pool? Can you have a mix of fast drives and capacity drives and have it shuffle data around?

So here are my answers:

When/why would someone choose boot from SAN versus either an SD card or mirrored raid ssd?

Booting from SAN solves a few problems in my opinion.

  1. It makes things cheaper. On the project I’m working on right now I was able to save about $2k by not purchasing local drives for the ESX hosts. It doesn’t seem like much, but when the SAN and 3 new hosts cost the customer under $40k… $2k is a decent amount.
  2. It’s more reliable, IMO. Don’t get me wrong, I have used USB/SD cards many times, and some of them from my earliest projects are still going. But if I can put a 2GB boot LUN on a SAN… and the SAN is under warranty… there is nothing that is going to cause that host not to boot. If a drive goes bad, just swap it; no host downtime or reload.

Can you compare/contrast the storage capabilities of direct attached fiber channel versus 1Gb Ethernet, 10Gb Ethernet, etc

Sure can. Fiber Channel is STUPID FAST. Sure, 10Gb Ethernet is fast too, but then I would have to configure 10Gb switches or at least a few /30 subnets so that each of the SAN ports would know which host it’s talking to. With direct attach Fiber Channel (or FC-AL in official terms) I just plug in cables… THAT’S LITERALLY IT.

It can also be argued that 8Gbps Fiber Channel is just as fast as 10Gbps iSCSI or FCoE. Plus, the VNXe1600 now supports 16Gbps Fiber Channel. It’s a no-brainer for smaller shops to direct-connect Fiber Channel.
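That “just as fast” claim roughly checks out on paper once you account for line encoding (a back-of-the-envelope sketch; the efficiencies are the standard 8b/10b and 64b/66b figures, and real-world iSCSI loses a bit more to TCP/IP overhead):

```python
def usable_mb_per_s(line_rate_gbaud, encoding_efficiency):
    """Usable payload bandwidth in MB/s after line-encoding overhead."""
    return line_rate_gbaud * encoding_efficiency * 1000 / 8

# 8GFC signals at 8.5 Gbaud with 8b/10b encoding (80% efficient)
fc_8g = usable_mb_per_s(8.5, 0.8)            # ~850 MB/s per direction

# 10GbE signals at 10.3125 Gbaud with 64b/66b encoding (~97% efficient)
eth_10g = usable_mb_per_s(10.3125, 64 / 66)  # ~1250 MB/s before TCP/iSCSI overhead

print(fc_8g, eth_10g)
```

So 8Gb FC and 10Gb iSCSI land in the same ballpark, and once the protocol overhead on the Ethernet side is factored in, the gap narrows further; the cabling simplicity is what tips it.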

I really like this configuration because I think it captures a lot of the small business use cases. Most of the time one host could do the job, but we choose two for fault tolerance. By using direct attached storage (in this case 2 hosts) you don’t have to rely on networking, you don’t have to rely on a FC switch.

BINGO! Eliminate two iSCSI switches from an SMB BOM and you just saved $5k… and took two items off warranty and out of the equation for troubleshooting. I’ve been doing this with the HP MSA2000/P2000 as well as the VNXe series for years. It works great and is super reliable. Plus, if a customer ever did need to scale, you could just add a switch later. If you go with the VNXe3200, it has 4 FC ports per controller, which is more than the number of hosts VMware Essentials Plus supports… So I always figured if a customer can afford enterprise-class VMware licensing, they can afford two Fiber Channel switches.

Can you talk more about the new VNXe – can it move data around in the storage pool? Can you have a mix of fast drives and capacity drives and have it shuffle data around?

The VNXe3200 has almost all of the capabilities of its big brother, the VNX series. It can do FAST VP as well as FAST Cache. Drive types as well as RAID types can be mixed in pools. It looks like the VNXe1600 only has FAST Cache support… no FAST VP. But you could still create two pools and manually sort the data. Honestly, if you just maxed out FAST Cache and then put in high-capacity 10k SAS drives, you are still going to be cheap enough that you can ignore NL-SAS drives.

Sorry for not going into more detail on the last question, but you would be better off checking the datasheets on those, as I’m just starting to get my hands on the 1600 now.


As always, let me know if you have any more questions.

EMC VNXe1600 – Configuring Hosts and LUNs

The VNXe1600 is the block-only version of the VNXe3200. For SMB-sized VMware environments this is the perfect storage array, as it allows the customer to add FAST Cache, it allows them to mix and match drive and RAID types, and it is easily expandable if needed.

Recently I was configuring some boot LUNs and a VMware datastore on a brand new VNXe1600 and thought I would share the process. It’s pretty quick and very easy to do, especially if you are doing direct attached Fiber Channel servers (remember, this box only has 2 CNA ports per SP, so only two servers can be direct attached without a switch).

This article doesn’t show the complete storage setup and assumes that storage pools have already been created. If you have not already created storage pools, do that first.

The first thing we need to do is make sure that the server we want to present storage to is configured as a host on the system. If you browse to the Initiators page under Hosts, you will see which initiators are registered (green check marks) as well as which ones still need to be registered (yellow exclamation marks). To register a host, go back up to the Hosts page and select Hosts, then run the wizard to add a new host.

If you are configuring a new ESXi host to boot from SAN, you cannot use the VMware host wizards to discover the hosts, as vCenter will not yet know about them. So you need to use the generic host wizard to add them in. Later on you can add vCenter to the SAN and it will find the ESX hosts just fine.

Adding a new host


The first step in the Host Wizard is to give the host a name. This is just a friendly name so you know what you’re working with; it doesn’t need to be the FQDN.


Next, select the operating system that most closely matches this host. (Again, if you are booting from SAN and this is an ESXi host, ignore the warning to use the Find ESX Hosts option.)


Enter the IP address or hostname.


If it’s an iSCSI host, you will need to add the IQN information; for my Fiber Channel hosts I skip that step and then select the two initiators that belong to this host.


Then on the summary page I click Finish.


The wizard will then report its progress as it completes the steps.


Creating a LUN for VMware

The VMware datastore wizard can be found under the Storage page. Click VMware and then Add to start the wizard.



Give the datastore a friendly name, make it the same as you would inside of vCenter for simplicity and easy troubleshooting down the road.


Then pick which storage pool you want to get the space from, and specify a size and whether it should be thin provisioned.


If you do not need snapshots because you are protecting your data some other way, turn it off to save storage.


Then tell the VNXe which hosts should be able to access the LUN. In most cases all your VMware hosts will need LUN access to the datastores.


Click finish on the summary page.


Note that for boot LUNs you will want to allow access only to the host that boots from it. Each host should have its own boot LUN.


EMC releases VNXe1600 – Block Only VNXe3200

It looks like there is a new member of the VNXe family, a block-only version of the VNXe3200. There are also some hardware differences:

The VNXe3200 has 4 copper RJ-45 ports (per SP) that can be 10Gb iSCSI or 10Gb File (NFS/SMB); the VNXe1600 has two CNAs (10Gb iSCSI or 4/8/16Gb Fiber Channel) per SP. If you want 1Gbps Ethernet, then you would need to purchase the 4-port eSLIC shown below in green.

VNXe1600 rear

Here is some more detailed information on the VNXe1600 via EMC marketing material:

Spec Sheet

Data Sheet

My take

I think this will make a great addition to the lineup; while the VNXe3200 was cheap… it wasn’t cheap enough in some cases. In fact, I was doing some research for a small company a couple of weeks ago where the VNXe3200 would just barely fit in their budget. The VNXe1600 will be perfect; some sites are saying usable configurations as low as $10k.

Right now a VNXe3200 with just a couple of drives and some FAST Cache would run you around $20k… so if all you need are the block features to hook up your VMware cluster, then dropping that by half will be pretty awesome. Up until now, that has been an area where only Dell and HP have played, with their MD and MSA lines.

Unfortunately, with my move to the vendor side and no longer working for a reseller, it may be some time before I get to check one of these out firsthand.

Power cable considerations on the UCS Mini

This may also apply to the UCS 5108 chassis in general, but as of this writing I haven’t had a chance to check one.

So here is the back of the chassis. I figured common sense would say to keep the two power ports on the right going to the right PDU, and the two ports on the left going to the left PDU… sounds logical, right?

power clean

Well, it turns out that the power supplies are labeled 4, 3, 2, 1… meaning the one on the left is PSU 4, and the one on the right is PSU 1. So when the chassis figures out that it doesn’t need all 4 PSUs online, it will start putting PSUs into power save mode from left to right. So PSU 4 is the first to go into power save mode, then PSU 3. (Note that by default, with an N+1 redundancy policy, there will always be at least two online.)

If you have all 8 blades powered up, or in my case 6 on this last install, this isn’t such a big deal because the chassis lights up at least 3 of the 4 PSUs. So if you cable the power like I mentioned before, you have two feeds coming from PDU A and one from PDU B. HOWEVER, if the chassis puts both PSU 4 and PSU 3 into power save mode, you are effectively drawing power only from PDU A. But what if you lose PDU A? Will the other two PSUs come back online fast enough to prevent power loss to the chassis?

I wasn’t sure, so I went with the safe bet. I cabled the PSUs circled in blue to PDU A and the ones in red to PDU B. This way there will always be at least one active PSU on each PDU.
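To make the failure case concrete, here is a toy model of the behavior described above (my assumption of the power-save ordering, not Cisco’s actual algorithm): the highest-numbered PSUs sleep first, and we check which PDUs still feed the chassis under each cabling scheme.

```python
def active_psus(needed):
    """PSU 4 power-saves first, then PSU 3, so slots 1..needed stay online."""
    return set(range(1, needed + 1))

def feeding_pdus(cabling, needed):
    """PDUs that still power the chassis once power-save kicks in."""
    return {cabling[psu] for psu in active_psus(needed)}

side_by_side = {4: "A", 3: "A", 2: "B", 1: "B"}  # left pair -> A, right pair -> B
alternating  = {4: "B", 3: "A", 2: "B", 1: "A"}  # the blue/red cabling described above

print(feeding_pdus(side_by_side, 2))  # side-by-side: only one PDU left feeding
print(feeding_pdus(alternating, 2))   # alternating: both PDUs still feeding
```

With only two PSUs online, side-by-side cabling leaves the whole chassis on a single PDU, while the alternating scheme keeps one active PSU on each, which is exactly the safe bet above.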


Next time I have some extra time to mess around I will do some real testing to see if the chassis goes offline during a PDU failure if cabled the first way.

Also, I guess I should note that the reason this concerns me more on the Mini than a normal 6200 series FI setup is that the FIs are in the chassis. So if you lose power and those FIs go offline, you would probably be down for a decent amount of time while the FIs boot back up, the servers power on, and traffic forwarding resumes.


Start up of a Cisco UCS Mini

If you are looking for a high level datasheet on the Cisco UCS Mini check out my UCS Mini Anatomy Infographic.


This week I was able to play around with a new toy, a Cisco UCS Mini. I took some screenshots of the process of how to get it up and running after you put the power to it. Not too many people will probably see this part unless you are on the consultant side… or choose not to have someone do the initial configuration, but I thought I would share anyhow.

Before we go too far, make sure you have power connected and, at a minimum, the 1Gbps management interfaces connected to a switch as well. At this point in time you really don’t need the 4 universal ports connected to anything, but you certainly can connect them to the switches if you want to.

To get started we have to plug a standard Cisco console cable into one of the fabric interconnects in the back of the chassis. Then, using PuTTY, you need to tell it whether you want to do the initial config via the console or the GUI. I chose GUI this time just for the heck of it.

Here is the initial information you need to enter to get the GUI config mode up and running.


As it states you can now open a web browser and start the config process. Choose the Express Setup button.


On the first fabric interconnect to be set up, you will want to select Initial Setup and then click Submit.


Next, select Enable Clustering, and then enter the IP information for this fabric interconnect. Please note that in total you will still need 3 IP addresses, just like a traditional UCS chassis: one for a virtual “floating” IP and one for each physical fabric interconnect in the chassis.


Once you have that information entered and submitted, you can switch the console cable over to the other fabric interconnect and tell it that you want to do GUI config and what IP you want on it.


After that we are ready to launch another browser window, this time to the other FI. As you can see, by the time all this happens it can already detect that it isn’t the only FI on the network; this is required so that it can pull the config from the other FI. Select Enable Clustering, indicate that this FI is Fabric B, and then enter the same admin password that you entered on the other FI.


Next you will need to enter Fabric Interconnect B’s management IP address. After clicking Submit, FI-B will go out, retrieve all the other settings from FI-A, and apply them. Then you are ready to use UCS Manager and finish configuring your new UCS blades.


And there you have it, you now have a “Launch UCS Manager” button instead of an express setup button. Now you can login and start to configure your ports and your templates. I will go through that in a separate post.


Also, I have been asked a couple of times about the power draw of the Mini. On this one, the power draw was right around 1,000-1,200 watts with 6 blades running. On 208V power it was drawing right around 5-6 amps.

One other note: if you are going to put Fiber Channel into this box, I was surprised to learn that the assignment of FC ports is the exact opposite of the 6324’s bigger brother. Instead of your Fiber Channel ports being the highest-numbered ports… they are the lowest-numbered ports. So make sure to plug Fiber Channel SFPs into ports 0 and 1, and your Ethernet SFPs or Twinax into ports 3 and 4 on the 6324 fabric interconnect.

Direct Attach Fiber Channel with the EMC VNXe3200

The demo box that I have from EMC does not have the Fiber Channel mezzanine card in it, but last week I did get a chance to configure a VNXe3200 with direct attached Fiber Channel hosts for the first time (customer install). I must say that the process was stupid simple.

Unfortunately I was not smart enough to grab screenshots during the install, but I will try to explain it as best I can without them.

Overall the installation couldn’t have been easier: I plugged each VMware host into each controller, powered up the SAN, initialized it, and provisioned my storage pools just like normal. Then I powered on the VMware hosts and made sure they could see the VNXe’s “0-byte” LUN. Once I saw that, I knew I was in pretty good shape. I double-checked the Initiators tab in Unisphere and, sure enough, it saw each of the WWNs from the Fiber Channel cards in the hosts.

After creating some VMware datastores in Unisphere, it allowed me to give access to each of the VMware hosts, the same as it would if they had been iSCSI attached.

Overall, the whole installation took about 4 hours from the time I started unboxing the SAN until I was migrating data from the existing Dell MD3000 SAS-attached array to the new VNXe3200 Fiber Channel-attached array. As far as performance, it was being limited by the Dell MD3000, but we were seeing as much as 200-300MB/s.

Definitely a great experience installing this config and look forward to doing it a bunch in the future!

My roadmap for the VNXe series

Disclaimer: I don’t work for EMC and I don’t have any inside information that any other customer or partner wouldn’t have. I also have no more influence over the product direction than any customer or partner would. The thoughts in this post are just my opinions.

Every now and then I get a little arrogant and do a post like this, where I take my best stab at what I would do if I were the product manager/chief architect of a particular product. Since I’ve worked with the VNXes since they first hit the market, and since I’m working on other articles while I have one of the new VNXe3200s, I thought what the hell, let’s do a fictional roadmap of the VNXe series, as well as EMC storage in general.

What the VNXe3200 seems to be

The 3200 is clearly a new chapter in EMC’s book, in more ways than one. First off, all previous VNX/Clariion/VNXe systems have had a copy of MS Windows on them for one reason or another (yes, even the VNX2s that just came out last year). The VNXe3200, however, does not; instead it runs the MCx code in user space above a Linux kernel. If proven, this could be a big step in avoiding royalty fees, as well as simplifying the architecture.

Secondly, the VNXe3200 is the first storage unit with the back end of a VNX (meaning native block protocols) and a file side which does not require physically separate data movers or control stations. That is, it is “truly” unified: both block and file in the same sheet-metal box. So I guess certain competitors are going to have to find something new to say about EMC.

Lastly, because it is running MCx code just like its big brothers the VNX2s, EMC has yet again simplified their development responsibilities (remember last year when they merged the VNX and VMAX dev teams? http://www.theregister.co.uk/2013/11/25/emc_reorgs/). So it’s not hard to see that a common code base is developing and could possibly be used across all array platforms, much like NetApp uses its ONTAP operating environment.

Overall this theoretically means that there really isn’t anything that a VNX can do that a VNXe cannot do. (more on this in the next section)

So what am I getting at?

Well in my opinion, the VNXe3200 is a test bed for what is to come. It is a ridiculously powerful platform that is running enterprise grade code… just with certain features turned off or hidden. If it works well, and EMC can prove that they don’t need external data movers and external control stations there is no reason why this architecture could not be rolled up into the VNX series. In fact I already told them that I think they should have called the VNXe3200 the VNX3200.

After talking with one of the guys at EMC about that name change, he explained that it did actually cross their minds, but when they compared what they were going to allow the VNXe3200 to do versus what their VNX systems are allowed to do, it just made more sense to leave the “e”.

So naturally my follow-up questions were “Well, why disable all of these advanced features?” and “Why not allow me to go check a box somewhere in Unisphere that lets me use a full-on ‘VNX style’ version of Unisphere… after all, it’s the same MCx code, right?” I obviously didn’t get an answer to any of those questions… it’s almost like they assume us bloggers are like “real” media or something, because product engineering and marketing guys always clam up when you start asking the good questions 🙂 LOL.

Let’s put another spin on it…

Why not sell me an “advanced features” license for my “VNXe”3200 that turns it into a “VNX”3200?

Talk about software defined!

In fact, you could even use that model in the VNX series too: make it so that the VNX5x00 series systems would only allow certain RAID configs, file or block protocols, and certain settings to be customized; then, if you have a SAN administrator or need a consultant to configure some crazy settings, you can add the advanced features license or the file services license. Now, before you start throwing things at your monitor because I’m mentioning more licensing, keep in mind that they wouldn’t necessarily need to charge more for this license… but if you don’t need them, then maybe the price would go down? Just food for thought, mainly.

Now to really blow your mind (maybe), and prove my point from above…

The VNX5200 Unified array has an Intel E5-2600 series quad-core processor (each core runs at 1.2GHz) and 16GB of RAM in each storage processor. This is what powers the block side of the array. On the file side of the house, each data mover has an Intel 5600 series proc and 6GB of RAM (core count not specified in this doc). So if you have two data movers you have a total of 12GB of RAM there, and system-wide you have a total of 44GB of RAM (32GB block side, 12GB file side) and at least 10 cores… maybe 12 at the most. This is what powers BOTH block and file.

Enter the VNXe3200.

Each SP has 24GB of RAM and a 2.2GHz Sandy Bridge quad-core proc, for a total of 48GB of RAM and 8 cores. And if you pay attention to Intel’s marketing at all, you know that Sandy Bridge is supposed to just kick the snot out of the 5600 series. So theoretically you have just as much horsepower in the VNXe3200 as you do in the VNX5200 (and if you add up the GHz, you actually have MORE in the VNXe3200 than the VNX5200). The main difference is that you only get one expansion option on the VNXe, whereas on the VNX5200 you have multiple SLIC modules to expand its IO capabilities.
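Tallying up the numbers quoted above (block side only for the VNX5200’s clocks, since the doc doesn’t list file-side core counts or speeds):

```python
# VNX5200: two SPs (block) plus two data movers (file), per the specs cited above
vnx5200_ram_gb = 2 * 16 + 2 * 6   # 32 GB block + 12 GB file = 44 GB total
vnx5200_block_ghz = 2 * 4 * 1.2   # 9.6 GHz across the 8 block-side cores

# VNXe3200: two SPs handle both block and file
vnxe3200_ram_gb = 2 * 24          # 48 GB total
vnxe3200_ghz = 2 * 4 * 2.2        # 17.6 GHz across 8 cores

print(vnx5200_ram_gb, vnxe3200_ram_gb, vnx5200_block_ghz, vnxe3200_ghz)
```

Even granting the VNX5200 its unlisted file-side clocks, the VNXe3200 comes out ahead on RAM (48GB vs 44GB) and on aggregate block-side GHz, which is the point above.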

Take away

If we were at a bar and I was explaining this, we would certainly be several beers deep at this point… so please remember I am just a rambling idiot with a web server and a blog. I have no access to inside information about EMC’s roadmaps, so all of this is probably way off! BUT if somehow the stars align and my crystal ball proves right, you can say you read it here first. Plus, how awesome would it be to buy a VNX at a VNXe price point?