MS SQL AlwaysOn Availability Groups, a better way to cluster

I’ve done posts before about how much I hate Microsoft clusters, mainly because of their shared disk requirements. While VMware does support a virtualized Microsoft cluster with shared disks, the bottom line is that it’s just a pain in the ass, so when I heard about SQL AAGs (AlwaysOn Availability Groups) at VMworld the other week I was pretty excited. Sure, they have been out for a while (they came out with SQL 2012), but apparently I live under a rock.

So at a high level, think of a SQL AAG just like you would an Exchange DAG. It allows you to use multiple instances of Windows and a specific application (in this case SQL Server) to create a highly available service, just like you would with an old-school Microsoft cluster. But there are actually a lot more benefits.

The example below, from an MS document, shows how you can use this functionality not only to make SQL highly available at your primary datacenter, but also to synchronously or asynchronously replicate that data to another SQL server at a DR site.

ag topology

I’m a big fan of using application-specific DR technologies. Don’t get me wrong, Zerto is awesome and Veeam does a good job too (as do most other products out there), but with this technology you’re going to get a SQL instance that is already running at DR the moment that things go down at your main site… it’s hard to beat that!

OK, so what’s the downside, right? Well, it’s all about the money, of course! Basically you’re going to get hit with two things… you need more disk space and you might need more licenses. Let me explain.

SQL Server allows you to have a passive copy of SQL running, so if you intend to have only a two-node cluster and the second copy of the data is 100% passive unless the first fails, then you need no extra licenses beyond whatever is required for one server. (Oh, BTW, you need SQL Enterprise edition for the AlwaysOn features.) If, however, you want more than one passive copy, or if you want to read from a passive copy (for doing backups, or to load balance the reads or something), then you have to fully license those copies as well.

For more information on Microsoft SQL licensing see here.

On the infrastructure side, things are a little more flexible. Obviously you could use physical servers if your environment requires it, and you can also make a bunch of virtual machines, but one option you can’t do with Microsoft clusters is put some nodes in the cloud and some on premises. That is probably one of the coolest things about the AlwaysOn feature: if you have a hybrid cloud, it will allow you to take advantage of both sites for a highly available solution.

How to make it happen

The first thing you need to do is install two or more copies of Windows; I used Server 2012 Datacenter for my testing. After Windows is installed, make sure to join the machines to your domain and set up the networking properly.

The second step is to install and configure failover clustering. To do that, add the Failover Clustering feature to all of your nodes; once it is on your nodes, log in to one of them and start the Failover Cluster Manager.

We need to run the Create Cluster Wizard. Once you start the wizard, add all of your nodes to the cluster.


Next allow the wizard to run the validation tests and make sure everything is ok. Then enter the cluster name and IP address.


Then click Next through the Create Cluster wizard. One note: I unchecked the “Add all eligible storage to the cluster” option, as I don’t want any storage managed by the cluster. You should now have a cluster with your nodes in it, and it should look something like this one.


Once you finish the wizard you can minimize the Failover Manager.

Next install SQL Server 2012 (or in my case 2014, just because I didn’t have 2012 on my laptop). Do a standard installation just like you would normally do; do not do a cluster installation, just a normal standalone install. Do this on each node that you want in the AlwaysOn availability group. Note: you will want to configure the SQL Server service to run under a domain login.

After SQL is installed we need to enable AlwaysOn. To do that, open SQL Server Configuration Manager and click on “SQL Server Services”. In the main part of the page you should see “SQL Server (MSSQLSERVER)”; right-click on it and select Properties. In the Properties dialog box, select the “AlwaysOn High Availability” tab, check the “Enable AlwaysOn Availability Groups” box, and then click OK.

enable always on

After enabling AlwaysOn you will need to restart the SQL Server service. To do that, you can go into Services, or you can simply right-click in the same place you did a minute ago and restart it from within SQL Server Configuration Manager.

Now inside of SQL Server Management Studio you should be able to expand the AlwaysOn section; however, there won’t be anything in it yet.

aag in management studio

Before we can create an availability group we need something to protect, so create a database and make sure its recovery model is set to Full. Also, for the nodes to get in sync with each other you will need a shared folder somewhere on your network; if you don’t already have one, create one now.
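If you prefer T-SQL over the GUI, this prep work can be sketched out like so. The database name and share path here are placeholders I made up, not anything from my lab:

```sql
-- Create a demo database and put it in the FULL recovery model,
-- which AlwaysOn requires.
CREATE DATABASE DemoDB;
GO
ALTER DATABASE DemoDB SET RECOVERY FULL;
GO
-- The database also needs at least one full backup before it can
-- join an availability group; the shared folder works for this too.
BACKUP DATABASE DemoDB TO DISK = N'\\fileserver\AGShare\DemoDB.bak';
GO
```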

After you have created your database go back down to “AlwaysOn High Availability” and right click, then select “New Availability Group”.

Give the group a name.


Next select the databases that you want to be in this group. You can select one or many databases.


Next you will need to specify which servers are going to be replicas. To do this click the Add Replica button and type the name of the other server(s) you want to replicate to.


Once you have all of your nodes added, make sure to select the proper options for what you want them to be doing. In most cases you will want them to fail over automatically and be synchronous, but remember that if you select “Readable Secondary” then technically you need to fully license that copy of SQL, as it will no longer be considered passive.


Before clicking Next, select the “Listener” tab. Here you need to create a listener with a name, port, and IP address. Then click Next.
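The Listener tab has a T-SQL equivalent as well. Here is a sketch with a placeholder group name, listener name, and IP; 1433 is just the default SQL port:

```sql
-- Add a listener to an existing availability group
-- (all names and addresses below are examples).
ALTER AVAILABILITY GROUP [DemoAG]
ADD LISTENER N'DemoListener' (
    WITH IP ((N'10.0.0.50', N'255.255.255.0')),
    PORT = 1433);
```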


Next we will use that shared directory I was telling you about. It will be used to store a backup of the database so that the other nodes of the group can sync up. Select “Full” and then enter the path to your shared folder.


Next the wizard will validate your setup, and then you will click Next to start the process of creating the group. After everything has completed you can click Close.
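For reference, the whole wizard can alternatively be scripted in T-SQL. This is a minimal sketch with made-up server, database, and group names; the endpoint URLs assume the default mirroring endpoint port of 5022, which the wizard normally creates for you:

```sql
-- Run on the primary (names are placeholders, not from my lab).
CREATE AVAILABILITY GROUP [DemoAG]
FOR DATABASE [DemoDB]
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL      = N'TCP://SQLNODE1.lab.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL      = N'TCP://SQLNODE2.lab.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC);

-- Then on the secondary, join the group and the database:
-- ALTER AVAILABILITY GROUP [DemoAG] JOIN;
-- ALTER DATABASE [DemoDB] SET HADR AVAILABILITY GROUP = [DemoAG];
```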


Now under AlwaysOn High Availability (after you refresh SQL Management Studio) you should see a group.


Also note that it has listed which nodes are primary and secondary.

At this point you can point clients at the listener and they can talk to the databases in your group!
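If you want to double-check the replica roles from a query window instead of the object tree, something like this works on SQL 2012 and later (a sketch, nothing specific to my lab):

```sql
-- Show each replica, its current role, and its synchronization health.
SELECT r.replica_server_name,
       rs.role_desc,
       rs.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS rs
JOIN sys.availability_replicas AS r
     ON rs.replica_id = r.replica_id;
```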

But what about the failover?

Well, I figured that showing you screenshots would be kinda dumb, so I built a two-node group and put together a video where I manually fail over between nodes, and then force a failover by rebooting a node.
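The manual failover can also be driven from T-SQL instead of Management Studio’s right-click menu. A sketch, assuming the placeholder group name `DemoAG`:

```sql
-- Planned failover: run this on the secondary you want to promote.
ALTER AVAILABILITY GROUP [DemoAG] FAILOVER;

-- Forced failover, only if the primary is really gone and you can
-- tolerate data loss:
-- ALTER AVAILABILITY GROUP [DemoAG] FORCE_FAILOVER_ALLOW_DATA_LOSS;
```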

Check out the video!

Video information:

I used VMware Workstation to create two Windows 2012 Server nodes and a Windows 2008 R2 domain controller. I then installed Failover Clustering on the two 2012 nodes as well as SQL Server 2014 CTP1.

So how can you get started? Well, I would start by doing more research. If you are going to have both nodes on a single site you will probably be OK, but if you are going to look at doing a second site, or a split-site two-node cluster, or something like that, you will probably want to look into how to create a file share witness so that you avoid some split-brain stuff, and make sure that services stay at your main site in the event of a network outage.

Anyhow, Thanks for reading!

How-To: Migrate MS SQL Cluster to a New SAN

During a recent SAN installation I ran into a Microsoft SQL cluster that needed to be migrated from an old SAN to a new SAN. After doing a bunch of reading and testing I developed the plan listed below. As it turned out, the cluster that needed to be migrated was using shared VMDK files instead of RDMs, so I was able to just migrate the VMDKs, but I thought I would share the plan for migrating a cluster using RDM disks just in case someone else runs into this situation.

Part 1. Present the new LUNs

Because the SQL servers are virtual machines using RDMs, I needed to create three new LUNs on the new SAN and present them to the VMware servers. These three LUNs would be used for: 1. Cluster Quorum Disk 2. MSDTC Disk 3. SQL Data Disk. I won’t dive deep into this step as it would be different for each SAN vendor, but in summary: create your new LUNs as needed and add them to the storage group that is presented to your VMware hosts; after that, rescan all of your VMware HBAs and verify that the VMware hosts can see the LUNs.

Part 2. Add New RDMs to Primary Cluster Node

Next we will add each of the new RDM Disks to our primary cluster node. Technically we would not have to mount them to the primary node, but I’m doing it that way just to keep things organized. Here are the steps for this section:

  1. Open Edit Settings of Node 1
  2. Click Add, then Select Disk
  3. Pick Raw Device Mapping as the new disk type
  4. Select the Raw LUN that you want to use
  5. Tell it to store the information about the RDM with the VM
  6. Select Physical Compatibility Mode
  7. Select a Virtual SCSI Node device that is unused (And is on a controller that is in physical mode)
  8. Complete the Wizard
  9. Repeat Steps 2 – 8 to add the number of new RDMs you will need
  10. Now click ok on the edit settings box to commit the changes
  11. After committing, go back into Edit Settings of node 1 and note the file names of the RDMs (mine were SQL1_6.vmdk and SQL1_7.vmdk; we will need these to configure node 2)

Part 3. Add Existing RDMs to Secondary Cluster Node

  1. Open Edit Settings of Node 2
  2. Click Add, then Select Disk
  3. Pick Existing Virtual Disk as the disk type
  4. Browse to where the config files of Node 1 are on the SAN and select the VMDK file that you made note of in step 11 of Part 2
  5. Select a Virtual SCSI Node device that is unused (And is on a controller that is in physical mode, should probably be the same as the first node)
  6. Complete the Wizard
  7. Repeat steps 2 – 6 for the remaining RDMs that you need to add to the second node
  8. Now click ok on the edit settings box to commit the changes

Part 4. Preparing the new RDMs in Windows

Note: these steps are performed only on node 1

  1. Open Disk Management and Rescan the server for new disks
  2. Right click on the first new drive and select “Online”
  3. Right click again on the first new disk and select “Initialize”
  4. Now right click the unallocated space on the first new disk and create a new volume
  5. Complete the new volume wizard and assign a temporary drive letter
  6. Repeat Step 2 – 5 for each new drive

Part 5. Add the new drives to the cluster

  1. Open “Failover Cluster Manager”
  2. Expand out the cluster you are working on and select the Storage item in the left tree.
  3. On the right click Add a Disk
  4. Make sure there are check marks beside all of the new drives you wish to add as a cluster disk
  5. Click OK
  6. Verify that the new disks now appear under Available Storage in the middle column

Part 6. Move the Cluster Quorum Disk

  1. Open “Failover Cluster Manager” if you don’t still have it open
  2. Right click the cluster you want to modify and select “More actions -> Configure Quorum Settings”
  3. Select “Node and Disk Majority” (or whatever you already have selected)
  4. Select the new disk that you want to use from the list (it should say “Available Storage” in the right column)
  5. Click next on the confirmation page
  6. Click Finish on the final step after the wizard has completed

Part 7. Move the SQL Data Disk

  1. Open “Failover Cluster Manager”
  2. Expand out the cluster you’re working on and select “SQL Server” under Services and applications
  3. Select “Add storage” from the menu on the right
  4. Select the new drive from the list, and click OK
  5. In the middle column right click “Name: YourClusterNameHere” and select “Take this resource offline”
  6. Confirm that you want to take SQL offline
  7. Verify that SQL Server and SQL Server Agent are offline
  8. Open Windows Explorer and copy the SQL data from the old drive to the new drive
  9. Back in Failover Cluster Manager, right click on the old disk in the middle column and select “Change drive letter”
  10. Give the old drive a temporary drive letter other than what it currently is, and click OK
  11. Confirm that you want to change the drive letter
  12. Next right click the new drive and select change drive letter, set the new drive’s letter to what the old drive was
  13. Again, confirm you want to change the drive letter
  14. Right click on SQL Server and select “Bring this resource online”, do the same for SQL Server Agent
  15. Right Click “Name: YourClusterNameHere” and select “Bring this resource online” in the middle column
  16. Verify that SQL starts and is accessible
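Step 16 can be done with a couple of quick queries in Management Studio. A sketch; nothing here is specific to the cluster in this post:

```sql
-- Every database should come back ONLINE after the drive letter swap.
SELECT name, state_desc FROM sys.databases;

-- And the data/log files should now resolve to the new drive letter.
SELECT DB_NAME(database_id) AS db, physical_name
FROM sys.master_files;
```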

Part 8. Moving MS DTC Witness Disk

From what I have read MSDTC’s witness disk cannot be moved like the SQL data can. Instead you simply delete the DTC instance and then recreate it using the disk that you want to use.

  1. Make sure SQL is shutdown
  2. Next Take the DTC instance offline
  3. Make sure to note the IP address of the DTC and the name
  4. Right click and delete the DTC instance
  5. Now right click on “Services and Applications” and select add new
  6. Pick DTC from the list and click next
  7. Fill in the information that you noted from the old instance, but select the new disk this time.
  8. Finish the wizard and make sure that the new instance is online

Part 9. Verify Operational Status

  1. Verify that SQL Server and SQL Agent are online
  2. Verify that MSDTC is online
  3. Login to SQL using a client application and verify functionality

This part is just to make sure that everything is still working. At this point you need to make sure that SQL is back online and that the client applications that it serves are working properly before we remove our old drives.

Part 10. Remove old disks from Cluster

  1. Open Failover Cluster Manager
  2. Select Storage
  3. Verify that the disks under “Available Storage” are the old drives
  4. Right click each old drive and select “Delete”
  5. Confirm that you wish to delete the cluster disk

Part 11. Remove Old Disks from VM settings

This part would seem simple, but you must make sure you remove the correct RDMs, otherwise you will have problems. The best way that I found to be absolutely sure was to make a note of how big the RDMs were that I would be removing. Then we can browse the datastore of the primary node and see which VMDK descriptor files show that size. Of course this only works if they are different sizes; otherwise you will have to go by the order they are in Windows and the order of their SCSI bus numbers in the VM settings.

After determining which disks need to be removed (that is, which VMDK files they are):

  1. On the secondary node go into Edit Settings and find which RDM drives have the same file name as the ones identified earlier
  2. Select the Remove button at the top of the hardware information page.
  3. Leave it set to “remove from vm” and don’t select delete from datastore
  4. Click OK to commit the changes
  5. Now go to the primary node’s Edit Settings dialog box
  6. Repeat Steps 2 – 4, but this time tell it to delete them from disk, as we no longer need the descriptor VMDK files for those RDMs
  7. Now that nothing else should be using those RDMs, you can delete them from your old SAN or un-mask those LUNs from your VMware hosts.

Accelerating Branch Office File Sharing with MS BranchCache

I’ve done a bunch of articles about how to save your bandwidth when you are trying to replicate backups and virtual machines to your disaster recovery site; however, I don’t think I have ever talked about anything that accelerates active data, in this case shared files.

Here is the official description from the MS TechNet article:

“BranchCache is a wide area network (WAN) bandwidth optimization technology that is included in some editions of the Windows Server® “8” Beta and Windows® 8 Consumer Preview operating systems, as well as in some editions of Windows Server® 2008 R2 and Windows® 7. To optimize WAN bandwidth when users access content on remote servers, BranchCache copies content from your main office or hosted cloud content servers and caches the content at branch office locations, allowing client computers at branch offices to access the content locally rather than over the WAN.”

This is a HUGE innovation from Microsoft. I believe that it will not only help traditional enterprises and their branch offices, but also SMBs that are looking to move their servers to the cloud but do not want to deal with very slow file downloads if they are not able to get big pipes.

So how does it work?

There are two types of BranchCache: one is designed for an office with no “server” at all… meaning that there are only PCs at that branch location. The other is designed so that a designated server does all of the caching. Below are two images of how they look; first is server-based BranchCache:


Distributed BranchCache:

In either situation here is an overview of what happens:

1.) A client transfers a document for the first time into a given branch office (defined by a subnet)
2.) The second computer that wants the same file will ask local PCs or a BranchCache server if they have a local copy of the file
3a.) If they do have a copy of the file, they will transfer it locally across the LAN to the computer that requested it.
3b.) If it is not local already, the server at the main site will send the file to the requesting PC.
4.) If a client makes changes to the file, they are sent back to the main file server.

Now you are probably thinking the same thing I first thought…why not just use DFSR?

BranchCache versus Distributed File System Replication

I found a good article that had the following list of pros and cons:

Branch Cache

Pros:

  • No version conflicts
  • Fast access for subsequent access

Cons:

  • Slow access for first-time access
  • Slow write access

DFSR

Pros:

  • Quick read/write access to data at all times
  • A limited amount of additional data security

Cons:

  • Backlogs can occur very easily
  • Version conflicts can be an issue with backlogs
  • Replication can take too long – not suited to real-time access to files between offices.

My Next Steps

I have not actually set this up in the lab yet, but that is the next step in the process for me. I plan to utilize some virtual Windows 2008 R2 servers with some Windows 7 desktops (note that they must be Enterprise or Ultimate) and make a virtual router or two so that I can make it believe they are in different offices (which again are designated by different subnets). I will report back after testing and let you know what my findings are and exactly how to set it up.

The Missing Manual Part 3: Application Aware Backups

One thing that I don’t think is stressed enough in the Veeam manual is the “Application Aware” check box, which is, by default, not checked. Because Veeam (like most other image-level backup software) does not do a true system state backup like in the old days, there can be some significant issues if you are restoring an Active Directory domain controller. However, Veeam can compensate for this problem if the “Application Aware Image Processing” check box has been selected. If you restore a domain controller from a backup and that box was not checked, then you run the risk of FUBARing Active Directory replication.

For more information on why the problem happens, check out this Microsoft KB article.

I’m no active directory expert, but I did get to witness what happens when you restore a DC without the Application Aware processing, and I must say that it is not a fun process to fix the problem. The best way to avoid having to deal with the problem is to just check the Application Aware Image processing box.

After you restore a system from a backup that did not have that box checked, symptoms include:

  • Netlogon service is not running
  • Active Directory may not replicate between servers
  • Event ID 2103 in Event Viewer (“The Active Directory database has been restored using an unsupported procedure.”)

I would encourage you to go through your backup jobs and verify that this check box has been checked and that you have valid domain credentials in the proper boxes below it on the same window.


Microsoft Licensing in a Virtual World

I have thought about touching this topic several times, and each time I start to write the post I find myself stopping. I think it’s mostly because this debate is hard for some people to get their heads around, and even when they do, they are just pissed off about it. I must say I agree a little bit, because the way Microsoft products are licensed really does make you want to tell them to stick it… but the downside is that most companies are not willing to go 100% open source with their software (or even just non-Microsoft), so until that changes I figured I had better write the article.

Microsoft licensing in a virtual environment is a totally different ballgame than it is in a physical server environment. Many organizations do not fully understand the requirements to keep things legal in Microsoft’s eyes.

To understand Microsoft licensing in a virtual environment we must start with knowing how it is licensed in a physical environment. Before we start I should also state that we are talking only about licensing under the Open Licensing programs, NOT OEM. Microsoft’s EULA is written in such a way that a license has a direct relationship with a piece of hardware. Meaning that when you buy a shiny new server and put Windows on it you must buy a license for that server, and when that server dies you can then replace it with a new server and transfer that license to the new server. When you transfer the license it is then “stuck” to that piece of hardware for at least 90 days before it can be moved again. As you can see, back in the day when virtualization was not heard of, licensing was not a huge deal, because if a server died chances are that it ran for more than 90 days before dying, and its replacement server probably lasted at least another 90.

Where that comes back to bite us is with virtualization, specifically vMotion, DRS, and HA in the VMware world. Virtualization has made it very easy to move a Windows instance from one physical server to another, so now if you have any of the above mentioned features there is no way to guarantee that you have only moved your Windows instances once every 90 days. (I guess you could track your event logs and figure it out… but that would just be a pain)

The key point so far is that a Windows license is for an instance of Windows to run on a piece of hardware, and that license is locked to a piece of hardware for no less than 90 days before being transferred. So because of this we can safely say that each instance of Windows requires a Windows license on each physical server that it *could* run on. So even if it’s not running there right now, or wasn’t before, but could be in the future… then it needs a license.

Right about now is probably where you’re thinking that there is no way in heck that this could be managed. And you are thinking correctly, at least if you’re still thinking about Windows Standard licenses. However, we haven’t talked about Windows Enterprise or Datacenter editions yet.

Before virtualization you really only needed to buy Windows Enterprise or Datacenter for very special use cases, because 95% of the time Windows Standard would get you by just fine. But because of the additional instance entitlements that come with these licenses, they are a great fit for virtualization. Let me explain a little more: Windows Standard allows us to run one instance of Windows per license; Windows Enterprise allows us to run 4 instances per license (as long as they are on the same piece of hardware); and Windows Datacenter allows us to run an UNLIMITED NUMBER of instances on a piece of hardware.
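To make the entitlement math concrete, here is a hypothetical worked example (the host count, VM count, and per-host framing are my own simplification; always verify against current licensing terms). Say you have a three-host cluster running 12 Windows VMs, any of which could land on any host:

\[
\begin{aligned}
\text{Standard (1 instance per license):} &\quad 12 \times 3 \text{ hosts} = 36 \text{ licenses} \\
\text{Enterprise (4 instances per license):} &\quad \lceil 12/4 \rceil \times 3 \text{ hosts} = 9 \text{ licenses} \\
\text{Datacenter (unlimited per host):} &\quad 1 \times 3 \text{ hosts} = 3 \text{ licenses}
\end{aligned}
\]

This is why Datacenter usually wins once a cluster can move VMs around freely.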

So when you’re looking over a quote for a new virtualization project and you see a few Windows Datacenter licenses on there, don’t be alarmed, because down the road it will save you money, time, and lots of headaches. Microsoft also has an Excel spreadsheet that will show you the most cost-effective options for licensing your virtual environment, and I encourage you to check that out (Google: Microsoft virtualization calculator).

The most important takeaway from this post is this: if you have Standard licenses now, and your new virtualization cluster has the ability to migrate VMs between hardware nodes (no matter what vendor of hypervisor it is), you WILL need additional Microsoft Windows licensing. To make sure you stay legal I would encourage you to work with a local VAR that has done virtualization migration projects in the past.


Hyper-V 3 for Me

Microsoft’s Hyper-V virtualization platform may be a 12-year-old hanging out with a bunch of high school seniors, but this pre-teen is mature for his age. Could Hyper-V be the Doogie Howser of virtual infrastructures?

I am humbled to be invited to share my Hyper-V experiences, opinions, lessons learned, etc. on Justin’s IT Blog. I am a former co-worker of Justin’s and a faithful follower of his blog even though I am not a VMware administrator; it’s always nice to know what the other guys are doing. I have 14 years of experience in the IS/IT field and am currently working in the healthcare industry. Our Hyper-V environment consists of a single highly available failover cluster with 120 VMs across seven hosts and makes up approximately one-third of our entire server footprint.

Just like Dr. Howser being able to treat patients as effectively as an older doctor, Microsoft’s Hyper-V is able to meet enterprise-class requirements for stability, performance, and high availability. However, Doogie, even being a genius, had two character flaws that created each episode’s comedic and learned life lessons. One such flaw was a lack of wisdom (knowledge gained through experience). Whereas mature virtualization hypervisors have been honed by years of use by thousands of customers, Hyper-V hasn’t had that much time to find what works and what doesn’t. Yes, they’ve paid attention to the lessons learned by Citrix and VMware, but that knowledge cannot equal working through the bugs and challenges of keeping an innovative edge over your competition. Doogie’s other flaw was that he was a teenager in an adult’s world. He was torn between socializing with his adolescent friends while still trying to fulfill his adult responsibilities. Needless to say, conflicts always ensued. The same comparison can be made with Hyper-V, as it is desperately trying to achieve the same level of product maturity and necessary features expected in any medical doctor, yet you still sometimes think “What am I doing putting this much trust and importance into such a young man?” This brings us to the intended point of this article. With the release of Windows Server 8, Hyper-V will turn 3 and will bring with it its first mustache. The difference is this ’stache is not thin and patchy, it is a full-on Burt Reynolds crumb catcher! Our little hypervisor is growing up so fast.

To start off, v3 will support multiple simultaneous Live Migrations (vMotions in VMware speak), which VMware shops have enjoyed for many years. Microsoft is also removing the shared storage requirement to perform a Live Migration, which gives more flexibility in the medium over which the VMs are moved (wireless, anyone? No? Good choice!) and the target, even if the target does not have access to the cluster shared volume. This is achieved because Hyper-V’s Quick Storage Migration feature is evolving into Live Storage Migration, which allows you to move the VM’s disk at the same time as live migrating the VM.

In addition to the improvement with uninterrupted migrations of VMs, Hyper-V has learned to do virtual switching.  This was another major feature VMware offered that Microsoft couldn’t match.  With Cisco’s announcement of Hyper-V support with its Nexus 1000V virtual switch, Hyper-V will now be able to support the same inter-VM traffic visibility, shaping, and bandwidth provisioning as ESXi.

To round out the list, Microsoft has increased the scalability of Hyper-V by supporting hosts with up to 160 logical processors (cores and hyperthreads) and 2TB of RAM. Each VM will support up to 32 vCPUs and 512GB of RAM, which is a significant improvement over the current 4 vCPUs and 8GB of RAM. Also, due to an upgrade to the VHDX file format, you will be able to provision up to a 16TB disk. These scalability improvements, along with virtual switching, pave the way for supporting more enterprise, large-scale apps.

Those squeamish about trusting their health to young doctors (especially when they are the same age as their child) may just want to take another look at Hyper-V when Windows Server 8 is released.  In almost every instance, there is a very significant cost difference between Hyper-V and ESXi environments in both initial acquisition and yearly maintenance.  I believe VMware will have to make some tough pricing changes in the near future or risk losing even more market share to the boy genius.

HP NIC Teaming on 2008 Server Core

As part of an upcoming Hyper-V project I was tasked with installing 2008 R2 Server Core with the Hyper-V role on an HP DL360 G7 server.

So I started out by loading the Windows OS just like normal… with the HP SmartStart boot CD; the only difference was that I selected the Datacenter “Core” install instead of the normal GUI install. Then I sat back and waited. After it was done, I logged in, installed the Hyper-V role, and joined the machine to the domain. I was able to remotely connect to the server and create a virtual machine to install SCVMM on (which wasn’t too hateful).

But then I got to thinking… I can only assign one NIC to a virtual network? What the heck!?

Obviously with the GUI installed I would just team the NICs with the HP tool and be on my way… but how the heck was I going to team the NICs when I had no GUI to run the tool from?

Well, after some checking around on other sites, I read that the HP Network Config Utility was installable on Server Core and could be launched. Could it be? Plus, because I used the SmartStart CD, there was no need to go out and download and install the software because it was already there; I just needed to run the following command:

Then after a short pause I was presented with:

After that it was pretty easy to setup a team with NICs 3 and 4 on the server! After doing that the NIC Team was available to assign to a Virtual Network.

The two problems with this method of teaming the NICs:

  1. There will be finger pointing when it doesn’t work (Microsoft <—> HP) and we all know how well HP NIC teaming software works these days.
  2. The team shows up as “HP Virtual Adapter #1” in the adapter list… there is no way to tell which NICs are in the team, and this will make management a pain once you have several teams in the same box.

Veeam’s Next Big Thing: Hyper-V

So now that Veeam has (in my opinion) conquered the VMware world, they have turned their sights to Hyper-V.

While I did not get a chance to view the webinar, I did watch the video presentation/demo that Gostev gave, and I did see some hints of v6 features that piqued my interest.

One of the things that looks awesome is the fail-back option on the replicas, which looks like it replicates the changes back to the main site for you when you fail back… priceless feature right there.

Click Here to watch the video

And of course you can always visit Veeam’s website for more details

Also, here is a screenshot of the main interface with all of the “new” stuff that I found highlighted… V6 looks like it might have a lot in store for us. Click the image to zoom.

Remote Desktop Connection Manager

This morning, while wasting some time before the baby’s doctor appointment, I was browsing around on Microsoft Connect. I had never heard of it before and thought I would see what was available on it. For those of you like me who don’t know what Microsoft Connect is: it is a site where Microsoft posts beta products, and you can download them and try them out for free!

Anyhow what brought me to the site was Windows Home Server 2011, but what I didn’t expect to find was a product called Remote Desktop Connection Manager.

It is a very small download, about 760KB, and it installs in about 5 seconds. After launching and adding some groups here is what I have:

So basically what we have is a one stop place to store all credentials (if you choose) and all connection information for every server that has RDP enabled.

Plus you can save the profiles… so for a place like SMSproTech I can go through and add in all the servers for a particular company and then save that profile into a customer profile in our ERP system. Then if other engineers need to connect to a server for maintenance they can open the profile and have the connection information for that customer. Or if you work for a company that does their own IT work, you could create multiple groups and organize all the servers for easy access.

Obviously you still have to create your VPN connection to the server (or, if you’re using this internally, be able to route to the server)… but other than that, this will store all your information in one place!

By default, subgroups and server connections inherit connection, username, and password information from their parent group. But you can change this by right-clicking on a group or server object and going to Properties. There you will find a menu where you can control all the normal RDP properties.

This particular box is where you set the group logon credentials… so if you put in a domain admin account, you can connect to all servers that use it without re-entering anything.


This part is pretty neat too: you can be connected to multiple servers at the same time, and to switch between them you just click the name on the left side… no need to move the mouse all the way to the bottom of your screen and find the correct session!


So how do you get this puppy?

Check out the Microsoft Connect Site Here or here is the direct MS download page

Windows 7 was MY idea ?

OK, well, it wasn’t my idea, because if it was I’d be frickin’ RICH! Anyhow, I’ve been up since about 3AM with the baby; it’s now about 5:30… it’s crunch time… eyes are getting heavy, fingers are starting to type slower, and I’m finding myself starting to “time travel” a bit. (We used the term time travel in college when we would be a little drunk and would fade in and out.)

So anyhow, back to Windows. Normally I hate having more than about two columns of desktop icons. I’ve never put much thought into why… I just don’t. So since right now I’m at almost a full 6 columns, I decided it was time to clean it up a bit.

But for some reason I was holding Control down and then moved my finger across the touch-pad, and all of a sudden the icons shrunk! Awesome: now I’ve reduced it to 4 columns without having to delete anything, plus since the icons are smaller it doesn’t seem to bother me quite as much. It shrinks the icons a lot, but only seems to reduce the text a small amount, so it’s still not too bad to read.