This anatomy of the Cisco UCS Mini post is a little different from the EMC VNX series anatomy post. Instead of using Microsoft Visio to create PDF and JPEG files, I decided to finally put some time into Visme.co, which, as it turns out, is a pretty excellent tool for creating infographics.
So if you are looking for a quick reference sheet on the Cisco UCS Mini system, then you are in the right place.
The folks at Visme were gracious enough to provide me with a premium account while I mess around with their site. Because of their generosity, I was able to create a PDF version, which you can download here.
If you’re in the habit of creating presentations, infographics, or anything like that, you should check out my review, then head over and look at their offering: http://visme.co
In the infographic, I outline some power requirements for the UCS chassis; for more information, check out the spec guide that I pulled from the UCS spec page. Specifically, check pages 19 and 20.
I also received a couple of messages from Bill Shields helping describe which blades are supported, as well as a little more info on the scalability connector.
“The scalability connector supports a 2nd chassis, C-Series, and 3rd party storage arrays like Nimble.” – What is nice about this is that if you want to do 10Gbps iSCSI but don’t have 10Gbps switches, you can use a QSFP breakout cable and plug it right into a Nimble (or similar) array.
“… I noticed you called out the B22 M3 which is EoS. Don’t know if that is worth footnoting. We also support the M3 version of all the other servers listed. When the next update to UCSM comes out (real soon now), we will be adding the B260 M4.” – The takeaway here is that blades change often. For the most up-to-date list, give your reseller a call or check cisco.com.
If you think there is something I should add or correct, please let me know! Thanks for reading.
Have a question you don’t want to share? Just shoot it over, and I’ll email you as soon as possible.
This may also apply to the UCS 5108 chassis in general, but as of writing this I haven’t had a chance to check one.
So here is the back of the chassis. I figured common sense would say to keep the two power ports on the right going to the right PDU, and the two ports on the left going to the left PDU… sounds logical, right?
Well, it turns out that the power supplies are labeled 4, 3, 2, 1… meaning the one on the left is PSU 4, and the one on the right is PSU 1. So when the chassis figures out that it doesn’t need all 4 PSUs online, it will start putting PSUs into power save mode from left to right. So PSU 4 is the first to go into power save mode, then PSU 3. (Note that by default, with an N+1 redundancy policy, there will always be at least two online.)
If you have all 8 blades powered up, or in my case 6 on this last install, this isn’t such a big deal because the chassis lights up at least 3 of the 4 PSUs. So if you cable the power as I mentioned before, you have two feeds coming from PDU A and one from PDU B. HOWEVER, if the chassis puts both PSU 4 and PSU 3 into power save mode, you are effectively drawing power only from PDU A. But what if you lose PDU A? Will the other two PSUs come back online fast enough to prevent power loss to the chassis?
I wasn’t sure, so I went with the safe bet. I cabled the PSUs circled in blue to PDU A and the ones in red to PDU B. This way there will always be at least one online PSU on each PDU.
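To make the failure mode concrete, here is a minimal sketch of the logic in plain Python. This is a hypothetical model, not anything Cisco ships, and the "crossed" cabling map below is one assumed way of alternating PSUs between PDUs; the original photo only shows blue and red circles.

```python
# Hypothetical model of the PSU power-save behavior described above.
# PSUs are numbered 4, 3, 2, 1 left to right, and power save mode
# shuts down PSU 4 first, then PSU 3.

def online_psus(power_save_count):
    """Return the PSUs still online after power save disables the leftmost N."""
    shutdown_order = [4, 3]  # PSU 4 goes into power save first, then PSU 3
    off = set(shutdown_order[:power_save_count])
    return [p for p in (1, 2, 3, 4) if p not in off]

def pdus_still_feeding(cabling, power_save_count):
    """Which PDUs still have at least one online PSU, for a given cabling map."""
    return {cabling[p] for p in online_psus(power_save_count)}

# The "logical" cabling: left pair to PDU B, right pair to PDU A.
naive = {4: "B", 3: "B", 2: "A", 1: "A"}
# The safer cross-cabling (assumed layout): alternate PSUs between PDUs.
crossed = {4: "B", 3: "A", 2: "B", 1: "A"}

# With PSUs 4 and 3 in power save, the naive layout draws from PDU A only.
print(pdus_still_feeding(naive, 2))    # {'A'}
print(pdus_still_feeding(crossed, 2))  # {'A', 'B'}
```

With two PSUs in power save, the naive layout leaves everything on PDU A, while the crossed layout keeps an online PSU on each PDU.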
Next time I have some extra time to mess around, I will do some real testing to see whether the chassis goes offline during a PDU failure when cabled the first way.
Also, I should note that the reason this concerns me more on the Mini than on a normal 6200 series FI setup is that the FIs are in the chassis. So if you lose power and those FIs go offline, you would probably be down for a decent amount of time while the FIs boot back up, the servers power on, and traffic forwarding resumes.
This week I was able to play around with a new toy: a Cisco UCS Mini. I took some screenshots of the process of getting it up and running after you apply power to it. Not too many people will see this part unless you are on the consultant side… or choose not to have someone do the initial configuration, but I thought I would share anyhow.
Before we go too far, make sure that, in addition to power, you have at a minimum the 1Gbps management interfaces connected to a switch. At this point you really don’t need the 4 universal ports connected to anything, but you certainly can connect them to the switches if you want to.
To get started, we have to plug a standard Cisco console cable into one of the fabric interconnects in the back of the chassis. Then, using PuTTY, you need to tell it whether you want to do the initial config via the console or the GUI. I chose GUI this time just for the heck of it.
Here is the initial information you need to enter to get the GUI config mode up and running.
As it states, you can now open a web browser and start the config process. Choose the Express Setup button.
On the first fabric interconnect to be set up, you will want to select Initial Setup and then click Submit.
Next you will want to select Enable clustering, and then enter the IP information for this fabric interconnect. Please note that in total you will still need 3 IP addresses, just like a traditional UCS chassis: one for the virtual “floating” cluster IP and one for each physical fabric interconnect in the chassis.
Once you have that information entered and submitted, you can switch the console cable over to the other fabric interconnect, tell it you want to do GUI config, and enter the IP address you want on it.
After that we are ready to launch another browser window, this time to the other FI. As you can see, by the time all this happens it can already detect that it isn’t the only FI on the network. This is required so that it can pull the config from the other FI. Select Enable Clustering, indicate that this FI is Fabric B, then enter the same admin password that you entered on the other FI.
Next you will need to enter Fabric Interconnect B’s management IP address. After clicking Submit, FI-B will go out, retrieve all the other settings from FI-A, and apply them. Then you are ready to use UCS Manager and finish configuring your new UCS blades.
And there you have it: you now have a “Launch UCS Manager” button instead of an Express Setup button. Now you can log in and start to configure your ports and your templates. I will go through that in a separate post.
Also, I have been asked a couple of times about the power draw of the Mini. On this one, the power draw was right around 1,000-1,200 watts with 6 blades running. On 208V power it was drawing right around 5-6 amps.
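For the curious, those amperage numbers line up with the simple arithmetic amps = watts / volts:

```python
# Sanity check on the power numbers above: amps = watts / volts,
# so 1,000-1,200W at 208V should land right around 5-6A.
def amps(watts, volts=208):
    return watts / volts

print(round(amps(1000), 1))  # 4.8 A at the low end
print(round(amps(1200), 1))  # 5.8 A at the high end
```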
One other note: if you are going to put Fibre Channel into this box, I was surprised to learn that the assignment of FC ports is the exact opposite of the 6324’s bigger brothers, the 6200 series. Instead of your Fibre Channel ports being the highest numbered ports… they are the lowest numbered ports. So make sure to plug Fibre Channel SFPs into ports 1 and 2, and your Ethernet SFPs or Twinax into ports 3 and 4 on the 6324 fabric interconnect.
So I was doing some BOMs for a customer project today and thought I’d share more information on what UCS Mini bundles are out there right now, as well as what your options are to add to those bundles.
Right now Cisco has three “Smart Play” bundles to get started with the new UCS Mini: a “Value Plus”, an “Entry Plus”, and an “Entry” bundle. Basically, if you want the cheapest option you would go with the Entry bundle, and if you want to go all out, go with the Value Plus.
Chassis/Fabric Interconnect Components
All bundles come with the same chassis, the same 6324 FIs, and the same SFP modules for those FIs. The chassis sub-bundle includes the following components:
4 – Power Cords (for US NEMA they send L6-20 twist lock cords)
2 – 6324 Fabric Interconnects
4 – GLC-T 1Gbps UTP SFP modules (2 per FI)
4 – 10Gbps SR SFP modules (2 per FI)
1 – UCS Central Domain License (a base license of UCS Central is still required if you want to use it)
4 – 208-240v Power Supplies
8 – UCS 5108 Fan Modules
The biggest thing to note here is that NO Fibre Channel SFP modules come with the bundle by default. For Fibre Channel support you will need to add DS-SFP-FC8G-SW=; these are the Cisco Fibre Channel SFP modules. You will want 4 per chassis (2 per FI) for a proper setup. If you are going to use Ethernet-based storage (iSCSI, NFS, FCoE), you may want to ask your partner to provide 4 additional 10Gbps SR modules so that you can either link directly to your storage (using the FI appliance port mode) or simply run 4 x 10Gbps links to your upstream switch.
The difference between the bundles is in the blades they contain. Each bundle has the same base B200 M3 blades; however, they have different CPU and memory combinations. I put together a quick chart to show the differences.
The configs are pretty straightforward: basically, you decide which CPU is going to provide the proper number of cores, and from there you can upgrade the blades to meet your exact needs.
Personally, I think that Cisco has made one mistake with their Entry Plus bundle, and that is the number of blades it has in it. A lot of customers that I can see being interested in the UCS Mini may also be looking at just going with VMware Essentials Plus, which has a 3 physical server limit, so customers may wonder what they are going to do with that extra blade. If it were me… I would revise the Entry Plus bundle to include 3 blades, and let customers decide if they want to add more or not.
With that being said, let’s take a look at what you need to know to order additional blades for your new UCS Mini.
Adding additional blades is pretty easy… you just need one part number.
Here is the summary of that post in relation to the UCS Mini though.
Because the 6324s act much like 2204 IO Modules (based on how traces are connected from the mezzanine slot and the VIC1240 LOM slot), you will only have 10Gbps of converged bandwidth PER FABRIC to each of the blades in a UCS Mini. That means if you are using 8Gbps Fibre Channel and are assigning 2 x 10Gbps virtual NICs to VMware (1 vHBA and 1 x 10Gbps Ethernet per fabric), you are effectively overcommitting by almost 2:1. So if you intend to have a lot of storage and/or Ethernet traffic, you will certainly want to add the VIC1240 expansion card to your blades.
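The "almost 2:1" figure falls out of simple arithmetic. A quick sketch, assuming the scenario above (one 8Gbps FC vHBA plus one 10Gbps Ethernet vNIC presented per fabric, over 10Gbps of physical converged bandwidth per fabric):

```python
# Rough oversubscription math for the UCS Mini scenario described above.
presented_per_fabric = 10 + 8   # Gbps: 1 x 10Gbps Ethernet vNIC + 1 x 8Gbps FC vHBA
physical_per_fabric = 10        # Gbps: converged bandwidth per fabric per blade

ratio = presented_per_fabric / physical_per_fabric
print(ratio)  # 1.8 -- "almost 2:1" overcommitted
```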
I’m not sure why I was looking for it, but I was trying to see if the 6324s support the VIC1240 expansion module. Basically, this module goes into a blade that has a VIC1240 (which can provide 2 x 10Gbps lanes to each FI) and upgrades it to provide an additional 2 x 10Gbps lanes to each FI… at least in a normal UCS FI environment.
What I noticed is that the 6324 only has 16 x 10Gbps lanes to the blades (listed as “Server ports” in the below table). This means that whether you have 1 blade or 8 blades… they have to share 16 lanes. So if you plan to have more than 4 blades, you will not be able to use the VIC1240 expansion module, because the FIs will run out of server ports.
To help explain this here is what the difference looks like when you are using normal UCS with FI’s and fabric extenders.
And here is what it typically looks like with the VIC1240 expansion module.
So as you can see, if you were to populate all 8 slots and have the expansion modules on each blade, you would need a total of 64 server ports on your FIs (32 per side)… yet the 6324 only supports 16 per side.
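The lane math above can be sketched quickly, assuming (per the diagrams) 4 lanes per blade to each fabric once the expansion module is installed:

```python
# Lane math for the 6324: each FI has 16 server ports (10Gbps lanes) shared
# by all blades, and a blade with the VIC1240 expansion module wants 4 lanes
# to each fabric interconnect.
LANES_PER_FI = 16
LANES_PER_BLADE_WITH_EXPANSION = 4

def lanes_needed_per_fi(blades, lanes_per_blade):
    return blades * lanes_per_blade

# 8 blades with expansion modules need 32 lanes per FI (64 total)...
print(lanes_needed_per_fi(8, LANES_PER_BLADE_WITH_EXPANSION))  # 32
# ...so only this many blades fit with expansion modules installed:
print(LANES_PER_FI // LANES_PER_BLADE_WITH_EXPANSION)          # 4
```

That is where the "more than 4 blades means no expansion module" limit comes from.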
NOW, before you rule out the UCS Mini… let me add that I personally have not run into an issue with not having the expansion module yet… you will still have 40Gbps into each blade, which is enough for 10Gbps of storage traffic and 10Gbps of Ethernet traffic per fabric, but it is still something to keep in mind if you are running something crazy (which would normally mean I would just use 6248 FIs and a traditional UCS config).
If you get stuck or just want some design help drop me an email to [email protected] or send me a message on twitter.
If you have been considering Cisco UCS blades for your upcoming server refresh, but know that you won’t ever need more than a single chassis of blades… then go grab some coffee or a Red Bull and read on!
The UCS “Mini”, as it’s being called, is the same in every way as the tried and true UCS platform… same blades… same chassis… same UCS Manager software. There are only two (main) differences:
the Fabric Interconnects – up until now the FIs have resembled Nexus switches… they are 1U each… have LOTS of universal ports on them… and make a shit load of noise (pardon my French)
the IO modules – in a traditional UCS design, each chassis receives a pair of IO modules that are then connected to the traditional fabric interconnects
With the UCS Mini, Cisco has collapsed the Fabric Interconnect into the IO module form factor. So instead of inserting IO modules into the chassis, you insert the Fabric Interconnects into it and ta-da… UCS Mini. Why? Well, some customers just don’t need to be able to scale to hundreds of blades… but they do want the goodness of UCS profiles and unified connectivity.
Cisco 6324 Fabric Interconnect
So here it is, the device that makes the UCS Mini… the Mini.
As you can see, there are not a lot of ports on this guy. But for a maximum of 8 blades (and up to 7 rack-mount servers) you really don’t need a lot. In fact, I just posted a UCS Mini design post today too, so check it out… it explains how I would use the ports on the new 6324.
Cisco has also released a bunch of new documentation on the limits and configuration details of the new 6324. Make sure to review these before finalizing your designs, as there are stricter limitations on certain features.
Just for reference here is a high level picture from the datasheet of what the complete solution can look like:
Other new stuff on UCS Mini
Before I forget, there are also a few other new things to talk about on the UCS Mini. The biggest one (at least in my opinion) is that you can now order the 5108 chassis (when purchasing for the UCS Mini only!) with 110V power supplies… this is huge for SMBs who don’t have 4 x 20A C19 sockets just sitting around. This is achieved through better dynamic power capping capabilities as well as staggering the boot of the blades.
USB firmware upgrades are also now a reality for the UCS Mini. And there is one other port on the 6324 that I haven’t mentioned yet: the 40Gbps QSFP port… or the “scalability port”… this port is what you can use to attach UCS C-Series systems to the 6324 fabric interconnects, or if you have some badass switch with 40Gbps QSFP ports, then you are all set!
I was introduced to Cisco servers with their C200 M1 box pretty shortly after it came out… Speaking of which, March 16, 2009 is the day Cisco announced the UCS platform, so I’m sure it was sometime in late 2009 that we got a couple of boxes. Anyhow, I wasn’t really impressed, as the box felt more like a white box than an enterprise server. Looking back, I can see how UCS has made HUGE strides, and now that I have many UCS blade installs under my belt, I am a believer in what they are doing with blades. Their rack-mounts… well, they certainly have a place, but aren’t always the sexiest girl at the party.
Anyhow, if you follow any of the tech news outlets, you will probably already know that Cisco has claimed the top spot in North American blade server sales (based on revenue).
I found the following pretty graph on the Cisco site, in which they use IDC data to present the numbers from the third quarter of 2009 (when they started selling UCS) to the new first quarter 2014 results. Effectively, they have gone from no market share to 40% in 5 years… impressive.
For me, the results bring some mixed feelings. Don’t get me wrong, I am a HUGE fan of UCS; I believe in the concept and trust the platform 100%. Plus, if I had to buy servers for myself or my company, I would pick UCS hands down. HOWEVER, I have a hard time with these numbers because they are based on revenue. I would be much more comfortable if the numbers were based on the actual number of servers shipped. Why?
The answer (IMO) is pretty simple… Cisco list price is CRAZY compared to what the servers actually get sold for. While I cannot speak for IBM or Dell or any of the other vendors, I do know that HP generally doesn’t give nearly as much of a discount off list price as Cisco does.
So while I don’t doubt that they are growing like crazy, and I don’t doubt that they are probably number 1… I have to wonder how big the gap really is if we compared the actual number of servers shipped from all vendors.
In closing, I just want to say congrats to Cisco! Even if they weren’t able to claim the number 1 spot I would still thank them tremendously for an amazing product. It’s certainly not too often that a new product becomes such a game changer in such short time.
Cisco added a really cool feature to their servers a while ago called FlexFlash. Basically, it allows you to use an SD card (even in a RAID 1 config) for your hypervisor as well as some other utility partitions. The problem is that with certain firmware versions, ESXi will lose its connection to the SD card controller. Your host won’t go down (remember, ESXi runs from RAM), but it will throw an error.
Luckily, Cisco released a firmware patch in October (one week to the day after I completed the install where these screenshots came from, LOL) that is supposed to fix the issue.
The firmware version that fixes the issue is 1.5(3d).
Navigate over to Cisco.com and under Support, type UCS (if you wait a second, it will list the UCS server models to pick from). In the list, click on your host model; this will take you to a page where you can click Server Firmware, which will take you to the page where you can download the Host Upgrade Utility ISO file. Make sure to get the correct ISO for your host, too, as they have different ISOs for each model.
This file is a bootable Linux CD that contains all of the firmware for your host. If you are remote, or just too lazy to burn the ISO and walk to the server room, load up the CIMC interface and launch the KVM console.
Next, go to the Media tab, attach the ISO you just downloaded, and click the “Map” checkbox next to it. Then reboot your server. When you see the BIOS screen, press F6 to enter the boot menu. From there, select the vKVM DVD option.
After selecting the vKVM DVD, give it some time to boot up the HUU. Once it’s booted, it will say that it is copying Firmware and Tools; this process will take a while if you are remote using the CIMC. Eventually you will need to click I Agree on the EULA, and then you will see the update manager screen, where you can simply click “Update All” to apply all new firmware to the server.
Once it is complete, you can reboot the host and put it back into production, and the FlexFlash “lost path” errors should be fixed…
If you are just looking for screenshots related to this error, or related to what FlexFlash looks like in general check out the gallery below.