Ubiquiti Unifi Virtual Appliance

I have some UniFi wireless APs at my house and was trying to find a virtual appliance version of the UniFi controller, but was unable to, so I went ahead and created one myself. You are welcome to use it, but it does not come with any support or warranty from me. 🙂 It is simply a minimal Ubuntu 16.04 LTS install with the packages needed to run the UniFi 5.0.7 controller software. The UniFi Controller software is pre-installed, so the appliance will boot up and UniFi will start automatically!

Before you see a dashboard like the screenshot below, you will need to walk through the initial configuration, because this appliance has a fresh install of the controller software. If you plan to import a configuration file from an existing controller, I would not adopt any APs or configure any SSIDs during the initial config; those will be imported automatically when you restore the config.

[Screenshot: UniFi controller dashboard]

When you fire it up, the credentials are ‘unifi/unifi’, and if you want root access you can sudo with the same password.

By default, it will try to pull DHCP from whatever virtual network it is attached to, but you are welcome to use the normal Ubuntu /etc/network/interfaces file to set a static IP.
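
If you go the static route, here is a minimal sketch of what that file might look like; the interface name and addresses are just examples I made up, so adjust them for your network (on 16.04 the interface is often named something like ens160 rather than eth0):

auto ens160
iface ens160 inet static
  address 192.168.1.20
  netmask 255.255.255.0
  gateway 192.168.1.1
  dns-nameservers 192.168.1.1

After saving the file, restart networking with "sudo systemctl restart networking" (or just reboot the appliance) to apply the change.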

My deployment

I deployed this appliance for myself and was able to successfully import a backup of the config from my Windows-based controller without any issues. The coolest part was that all I had to do to migrate my APs to this new controller was shut down the old controller and import the config into this one! That's AWESOME!

[Screenshot: migrated APs showing up in the new controller]

Now I just need to get some Unifi switches and a router to complete the Unifi Puzzle!

Looking for the UniFi Hardware?

If you haven’t completed your Ubiquiti Unifi hardware deployment, Amazon has great prices on all the UniFi hardware.

  • UniFi Security Gateway
  • UniFi PoE Switch
  • UniFi Wireless Access Point

OVF Download

I'll try to keep this up to date as I update my controller with major releases. Please note that automatic Ubuntu security updates are not enabled on this appliance, so I highly recommend installing them from time to time.
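
Pulling the security updates in by hand is the usual apt routine:

sudo apt-get update
sudo apt-get dist-upgrade

If you would rather have the appliance handle security patches on its own, the unattended-upgrades package can take care of that; on 16.04 something like this should do it:

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades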

Unifi 5.0.7 – Ubuntu 16.04

Username: unifi Password: unifi

Download Size: 948MB

https://1drv.ms/u/s!ANFHYV92O1unqT0

 

MS SQL AlwaysOn Availability Groups, a better way to cluster

I've done posts before about how much I hate Microsoft clusters, mainly because of their shared disk requirements. While VMware does support a virtualized Microsoft cluster with shared disks, the bottom line is that it's just a pain in the ass, so when I heard about SQL AAGs (AlwaysOn Availability Groups) at VMworld the other week I was pretty excited. Sure, they have been out for a while (they arrived with SQL Server 2012), but apparently I live under a rock.

So at a high level, think of a SQL AAG much like an Exchange DAG. It allows you to use multiple instances of Windows and a specific application (in this case SQL Server) to create a highly available service, just like you would with an old-school Microsoft cluster, but with quite a few more benefits.

The example below, taken from a Microsoft document, shows how you can use this functionality not only to make SQL highly available at your primary datacenter, but also to synchronously or asynchronously replicate that data to another SQL server at a DR site.

[Diagram: availability group topology, from Microsoft documentation]

I'm a big fan of using application-specific DR technologies. Don't get me wrong, Zerto is awesome and Veeam does a good job too (as do most other products out there), but with this technology you're going to get a SQL instance that is already running at DR the moment things go down at your main site. It's hard to beat that!

OK, so what's the downside? Well, it's all about the money, of course. Basically you're going to get hit with two things: you need more disk space, and you might need more licenses. Let me explain.

SQL Server licensing allows you to have one passive copy of SQL running, so if you intend to have only a two-node cluster and the second copy of the data is 100% passive unless the first fails, then you need no extra licenses beyond whatever is required for one server. (Oh, and by the way, you need SQL Server Enterprise Edition for the AlwaysOn Availability Group features.) If, however, you want more than one passive copy, or if you want to read from a passive copy (for doing backups, or just to load-balance the reads), then you have to fully license those copies as well.

For more information on Microsoft SQL licensing see here.

On the disk side, things are a little more flexible. Obviously you could use physical servers if your environment requires it, and you can also build a bunch of virtual machines, but one thing you can't do with Microsoft clusters is put some nodes in the cloud and some on premises. That is probably one of the coolest things about the AlwaysOn feature: if you have a hybrid cloud, it allows you to take advantage of both sites for a highly available solution.

How to make it happen

The first thing you need to do is install two or more copies of Windows; I used Server 2012 Datacenter for my testing. After Windows is installed, make sure to join the machines to your domain and set up the networking properly.

The second step is to install and configure failover clustering. To do that, add the Failover Clustering feature to all of your nodes; once it is installed, log in to one of them and start Failover Cluster Manager.

We need to run the Create Cluster wizard. Once the wizard starts, add all of your nodes to the cluster.

[Screenshot: adding nodes in the Create Cluster wizard]

Next allow the wizard to run the validation tests and make sure everything is ok. Then enter the cluster name and IP address.

[Screenshot: entering the cluster name and IP address]

Then click next through the Create Cluster wizard. One note, I unchecked the “Add all eligible storage to the cluster” option as I don’t want any storage managed by the cluster. You should now have a cluster with your nodes in it, and it should look something like this one.

[Screenshot: the completed cluster in Failover Cluster Manager]

Once you finish the wizard you can minimize the Failover Manager.

Next, install SQL Server 2012 (or in my case 2014, just because I didn't have 2012 on my laptop). Do a standard installation just like you normally would; do not do a cluster installation, just a normal stand-alone install. Do this on each node that you want in the AlwaysOn availability group. Note: you will want to configure the SQL Server service to run under a domain account.

After SQL is installed we need to enable AlwaysOn. To do that, open SQL Server Configuration Manager and click on "SQL Server Services". In the main part of the page you should see "SQL Server (MSSQLSERVER)"; right-click on it and select Properties. In the properties dialog box, select the "AlwaysOn High Availability" tab, check the "Enable AlwaysOn Availability Groups" box, and then click OK to exit the properties.

[Screenshot: the AlwaysOn High Availability tab in the SQL Server service properties]

After enabling AlwaysOn you will need to restart SQL Server. To do that you can go into Services, or you can simply right-click the same entry you used a minute ago and restart it from within SQL Server Configuration Manager.
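
If you want to confirm the setting took effect after the restart, a quick command-line check (assuming you can connect with Windows authentication) should return 1:

sqlcmd -S localhost -E -Q "SELECT SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled"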

Now, inside SQL Server Management Studio, you should be able to expand the AlwaysOn High Availability section; however, there won't be anything in it yet.

[Screenshot: the empty AlwaysOn High Availability node in SQL Server Management Studio]

Before we can create an availability group we need something to protect, so create a database and make sure its recovery model is set to Full. Also, for the nodes to get in sync with each other you will need a shared folder somewhere on your network; if you don't already have one, create one now.
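
If you would rather handle that part from the command line, something like the sketch below works; the database name and share path are just examples I made up. Note that the database also needs at least one full backup before it can join an availability group, which is why I take one here:

sqlcmd -S localhost -E -Q "ALTER DATABASE [MyAppDB] SET RECOVERY FULL"
sqlcmd -S localhost -E -Q "BACKUP DATABASE [MyAppDB] TO DISK = N'\\fileserver\sqlshare\MyAppDB.bak'"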

After you have created your database go back down to “AlwaysOn High Availability” and right click, then select “New Availability Group”.

Give the group a name.

[Screenshot: naming the availability group]

Next select the databases that you want to be in this group. You can select one or many databases.

[Screenshot: selecting the databases for the group]

Next you will need to specify which servers are going to be replicas. To do this click the Add Replica button and type the name of the other server(s) you want to replicate to.

[Screenshot: adding replica servers]

Once you have all of your nodes added, make sure to select the proper options for what you want them to do. In most cases you will want them to fail over automatically and be synchronous, but remember: if you select "Readable Secondary", then technically you need to fully license that copy of SQL, as it will no longer be considered passive.

[Screenshot: replica failover and synchronization options]

Before clicking next select the “Listener” tab. Here you need to create a listener with a name, port, and IP address. Then click next.

[Screenshot: creating the availability group listener]
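
For reference, the same listener can also be created with T-SQL once the group exists; the group name, listener name, and IP below are just examples:

sqlcmd -S SQLNODE1 -E -Q "ALTER AVAILABILITY GROUP [MyAG] ADD LISTENER N'MyAGListener' (WITH IP ((N'192.168.1.50', N'255.255.255.0')), PORT = 1433)"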

Next we will use that shared directory I was telling you about. It will be used to store a backup of the database so that you can sync up the other nodes of the group. Select “Full” and then enter the path to your shared folder.

[Screenshot: selecting Full initial data synchronization and the shared folder path]

Next the wizard will validate your setup, and then you will click Next to start the process of setting up the group. After everything has completed, you can click Close.

[Screenshot: validation results at the end of the wizard]

Now under AlwaysOn High Availability (after you refresh SQL Management Studio) you should see a group.

[Screenshot: the new availability group in Management Studio]

Also note that it has listed which nodes are primary and secondary.

At this point you can point clients at the listener and they can talk to the databases in your group!
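
A quick way to test that from any machine with the SQL client tools installed is to query @@SERVERNAME through the listener; it will report whichever node currently holds the primary role (the listener name, port, and database here are just my example names):

sqlcmd -S MyAGListener,1433 -E -d MyAppDB -Q "SELECT @@SERVERNAME AS CurrentPrimary"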

But what about the failover?

Well, I figured that showing you screenshots would be kinda dumb, so I built a two-node group and put together a video where I manually fail over between nodes, and then force a failover by rebooting a node.

Check out the video!

Video information:

I used VMware Workstation to create two Windows 2012 Server nodes and a Windows 2008 R2 domain controller. I then installed Failover Clustering on the two 2012 nodes as well as SQL Server 2014 CTP1.

So how can you get started? Well, I would start by doing more research. If you are going to have both nodes at a single site you will probably be OK, but if you are looking at a second site, or a split-site two-node cluster, or something like that, you will probably want to look into creating a file share witness so that you avoid split-brain scenarios and make sure that services stay at your main site in the event of a network outage.

Anyhow, Thanks for reading!

Lima Ohio Raspberry Pi Users Meeting Saturday

Just a quick reminder that this Saturday, July 13th, is the first Raspberry Pi users group meeting.

The event is being sponsored by the Lima Regional IT Alliance (LRITA.org) as well as MCM Electronics. There will be pizza and an LRITA USB thumb drive for everyone who attends, and as a bonus both LRITA and MCM Electronics will be giving out Raspberry Pis (some with extra goodies) as door prizes!

For more information and to sign up check out the official LRITA page here: http://www.lrita.org/events/bit-talk/july-13,-2013-raspberry-pi-meet-up.aspx

Lastly, I want to give a huge shout-out to Brian and the team at MCM Electronics. I literally emailed them this morning, and Brian not only agreed to provide some door prizes for the event, but will actually be attending as well! If you are like me and never win anything, I would still definitely recommend checking out MCM's site; it's where I order all my Raspberry Pi stuff, and since they are less than two hours away I can go from "harebrained idea" to having hardware the next day when I order from them!

How to Expand Zimbra Appliance Hard Drive and Mail Storage Capacity

Lately I've been testing out the new Zimbra 8 beta appliance, both in my lab and on the cluster where I host my blog and some other sites. One of the first things I noticed was that by default the appliance version ships with only a 12GB VMDK for the message store. This would probably be more than enough for lab purposes, but I decided that I needed to expand it to 50GB, which is probably a much more reasonable size for an SMB.

We know that the appliance runs on Ubuntu 10.04 LTS and that it uses LVM for volume management. Basically, what we need to do is go into the virtual machine's properties, find "Hard Disk 2", and raise it from 12GB to whatever size we want; I picked 50GB. Then we need to log in to the Zimbra CLI; to do this, use root as the username and vmware as the password, unless you changed it during the Zimbra setup.

Overview of LVM

Before I get into the nuts and bolts of what to type to get things expanded I thought I would first talk a little about how LVM works, as it might help you understand some of the steps and why we need to do what we are going to do.

LVM stands for Logical Volume Manager, and it is a more dynamic way to manage storage on a Linux system than traditional partitioning. There are several terms that we will be mentioning in this article: Physical Volume, Volume Group, and Logical Volume.

Physical Volume: A Physical Volume is a partition or block device that will actually hold the 1’s and 0’s that you plan to write.

Volume Group: A Volume Group is a group of Physical Volumes that are grouped together to provide a pool of space.

Logical Volume: A Logical Volume is a bucket, or area of space, carved from a Volume Group and used by the Linux operating system much like a traditional disk or partition. This is the "device" that is formatted with a file system, and the part that is mounted.

So in LVM terms, we are going to grow our Physical Volume, then we will add that new space to our Logical Volume, and finally we will tell the ext3 file system that it has just grown by 38GB, for a total of 50GB. The best part is that we can do it all while the server (and volume) is online and working!
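
For reference, here is the whole sequence at a glance; each command is explained step by step below, and the device and volume names are the ones the Zimbra appliance uses, so confirm them on your own system before running anything:

pvresize /dev/sdb
vgdisplay data_vg
lvresize -L +38GB /dev/mapper/data_vg-zimbra
resize2fs -p /dev/mapper/data_vg-zimbra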

Steps to Expansion

Once you're on the CLI we can start making the expansion happen. First, let's make sure that Ubuntu sees the larger drive; to do that, type "cfdisk /dev/sdb". We should see 50GB on the right, not 12GB.

After verifying that the volume shows the larger size in the partition editor we can start the process; just make sure to exit cfdisk without changing anything.

The first step is to expand our Physical Volume. To do that we need to run this command:

pvresize /dev/sdb

After running this command we can run "pvdisplay" and we will see the statistics for our Physical Volume /dev/sdb; it will tell us that it is now seen as 50GB and that there are "Free PE". See the screenshot below.

Step 2 is to verify that the Volume Group sees the additional space. To check this, run the following command:

vgdisplay data_vg

If all is well you should see 12GB in use and 38GB free.

Next we need to allocate that 38GB of free space in the data_vg Volume Group to the Zimbra Logical Volume.

The command to do that part is:

lvresize -L +38GB /dev/mapper/data_vg-zimbra

Finally, now that the new 38GB has been allocated to the Logical Volume, the last step is to expand our ext3 file system. We can do that while the volume is in use by running this command:

resize2fs -p /dev/mapper/data_vg-zimbra

After that has completed we are finished! If you want to double check that all is well you can run df -h to verify the size of the logical volume mounted at /opt/zimbra.

How-To: Migrate MS SQL Cluster to a New SAN

During a recent SAN installation I ran into a Microsoft SQL cluster that needed to be migrated from an old SAN to a new SAN. After doing a bunch of reading and testing I developed the plan listed below. As it turned out, the cluster that needed to be migrated was using shared VMDK files instead of RDMs, so I was able to just migrate the VMDKs, but I thought I would share the plan for migrating a cluster that uses RDM disks in case someone else runs into this situation.

Part 1. Present the new LUNs

Because the SQL servers are virtual machines using RDMs, I needed to create three new LUNs on the new SAN and present them to the VMware servers. These three LUNs would be used for: 1. Cluster Quorum Disk, 2. MSDTC Disk, 3. SQL Data Disk. I won't dive deep into this step as it will be different for each SAN vendor, but in summary: create your new LUNs as needed and add them to the storage group that is presented to your VMware hosts, then rescan all of your VMware HBAs and verify that the VMware hosts can see the LUNs.
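
If you prefer doing the rescan from the ESXi command line instead of the vSphere client, something along these lines should work depending on your ESXi version; the first command rescans every HBA and the second lists the devices the host can see so you can confirm the new LUNs showed up:

esxcli storage core adapter rescan --all
esxcli storage core device list | more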

Part 2. Add New RDM’s to Primary Cluster Node

Next we will add each of the new RDM Disks to our primary cluster node. Technically we would not have to mount them to the primary node, but I’m doing it that way just to keep things organized. Here are the steps for this section:

  1. Open Edit Settings of Node 1
  2. Click Add, then select Disk
  3. Pick Raw Device Mapping as the new disk type
  4. Select the raw LUN that you want to use
  5. Tell it to store the information about the RDM with the VM
  6. Select Physical Compatibility Mode
  7. Select a virtual SCSI node device that is unused (and is on a controller that is in physical mode)
  8. Complete the wizard
  9. Repeat steps 2 – 8 to add the number of new RDMs you will need
  10. Now click OK on the Edit Settings box to commit the changes
  11. After committing, go back into Edit Settings of node 1 and note the file name of each RDM (mine were SQL1_6.vmdk and SQL1_7.vmdk); we will need these to configure node 2

Part 3. Add Existing RDM’s to Secondary Cluster Node

  1. Open Edit Settings of Node 2
  2. Click Add, then select Disk
  3. Pick Existing Virtual Disk as the disk type
  4. Browse to where the config files of Node 1 are on the SAN and select the VMDK file that you made note of in step 11 of Part 2
  5. Select a virtual SCSI node device that is unused (and is on a controller that is in physical mode; it should probably be the same as on the first node)
  6. Complete the wizard
  7. Repeat steps 2 – 6 for the remaining RDMs that you need to add to the second node
  8. Now click OK on the Edit Settings box to commit the changes

Part 4. Preparing the new RDM’s in Windows

Note: these steps are performed only on node 1 (a rough diskpart equivalent is sketched after this list)

  1. Open Disk Management and Rescan the server for new disks
  2. Right click on the first new drive and select “Online”
  3. Right click again on the first new disk and select “Initialize”
  4. Now right click the unallocated area of the first new disk and pick the option to create a new volume
  5. Complete the new volume wizard and assign a temporary drive letter
  6. Repeat Step 2 – 5 for each new drive
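
If you prefer to script this part, a rough diskpart equivalent looks something like the sketch below; the disk number and drive letter are just examples, so double-check which disk is which with "list disk" before you touch anything:

diskpart
DISKPART> rescan
DISKPART> list disk
DISKPART> select disk 2
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> create partition primary
DISKPART> format fs=ntfs quick label="NewClusterDisk"
DISKPART> assign letter=T

Repeat the select/online/format portion for each new disk.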

Part 5. Add the new drives to the cluster

  1. Open “Failover Cluster Manager”
  2. Expand out the cluster you are working on and select the Storage item in the left tree.
  3. On the right click Add a Disk
  4. Make sure there are check marks beside all of the new drives you wish to add as a cluster disk
  5. Click OK
  6. Verify that the new disks now appear under Available Storage in the middle column

Part 6. Move the Cluster Quorum Disk

  1. Open "Failover Cluster Manager" if you don't still have it open
  2. Right click the cluster you want to modify and select "More Actions -> Configure Cluster Quorum Settings"
  3. Select "Node and Disk Majority" (or whatever you already have selected)
  4. Select the new disk that you want to use from the list (it should say "Available Storage" in the right column)
  5. Click next on the confirmation page
  6. Click Finish on the final step after the wizard has completed

Part 7. Move the SQL Data Disk

  1. Open “Failover Cluster Manager”
  2. Expand out the cluster you're working on and select "SQL Server" under Services and applications
  3. Select “Add storage” from the menu on the right
  4. Select the new drive from the list, and click OK
  5. In the middle column right click “Name: YourClusterNameHere” and select “Take this resource offline”
  6. Confirm that you want to take SQL offline
  7. Verify that SQL Server and SQL Server Agent are offline
  8. Open Windows Explorer and copy the SQL data from the old drive to the new drive
  9. Back in Failover Cluster Manager, right click on the old disk in the middle column and select "Change drive letter"
  10. Give the old drive a temporary drive letter other than what it currently is, and click OK
  11. Confirm that you want to change the drive letter
  12. Next right click the new drive and select change drive letter, set the new drive’s letter to what the old drive was
  13. Again, confirm you want to change the drive letter
  14. Right click on SQL Server and select “Bring this resource online”, do the same for SQL Server Agent
  15. Right Click “Name: YourClusterNameHere” and select “Bring this resource online” in the middle column
  16. Verify that SQL starts and is accessible

Part 8. Moving MS DTC Witness Disk

From what I have read MSDTC’s witness disk cannot be moved like the SQL data can. Instead you simply delete the DTC instance and then recreate it using the disk that you want to use.

  1. Make sure SQL is shutdown
  2. Next Take the DTC instance offline
  3. Make sure to note the IP address of the DTC and the name
  4. Right click and delete the DTC instance
  5. Now right click on “Services and Applications” and select add new
  6. Pick DTC from the list and click next
  7. Fill in the information that you noted from the old instance, but select the new disk this time.
  8. Finish the wizard and make sure that the new instance is online

Part 9. Verify Operational Status

  1. Verify that SQL Server and SQL Agent are online
  2. Verify that MSDTC is online
  3. Login to SQL using a client application and verify functionality

This part is just to make sure that everything is still working. At this point you need to make sure that SQL is back online and that the client applications that it serves are working properly before we remove our old drives.

Part 10. Remove old disks from Cluster

  1. Open Failover Cluster Manager
  2. Select Storage
  3. Verify that the disks under “Available Storage” are the old drives
  4. Right click each old drive and select “Delete”
  5. Confirm that you wish to delete the cluster disk

Part 11. Remove Old Disks from VM settings

This part would seem simple, but you must make sure you remove the correct RDMs, otherwise you will have problems. The best way that I found to be absolutely sure was to make a note of how big the RDMs were that I would be removing. Then we can browse the datastore of the primary node and see which VMDK descriptor files show that size. Of course, this only works if they are different sizes; otherwise you will have to go by the order they appear in Windows and the order of their SCSI bus numbers in the VM settings.

After determining which disks need to be removed (which VMDK files they are that is):

  1. On the secondary node go into Edit Settings and find which RDM drives have the same file name as the ones identified earlier
  2. Select the Remove button at the top of the hardware information page.
  3. Leave it set to “remove from vm” and don’t select delete from datastore
  4. Click OK to commit the changes
  5. Now go to the primary node’s Edit Settings dialog box
  6. Repeat steps 2 – 4, but this time tell it to delete them from disk, as we no longer need the descriptor VMDK files for those RDMs
  7. Now that nothing else is using those RDMs, you can delete them from your old SAN or unmask those LUNs from your VMware hosts.

Defragmenting your Iomega IX2-200

Disclaimer: As with all how-to posts here, I do not take any responsibility if you lose your data. I have done research to make sure that all information provided here is as accurate as possible, but there are always variables. Make sure that the information below is relevant to your environment, and always have a complete backup before trying this at home (or the office). 🙂

I have two Iomega IX2-200 NAS devices; they do a great job of storing my media as well as providing some iSCSI and NFS storage for my VMware lab. Unfortunately, because they only have two drives in them they have always been fairly slow, but in an effort to squeeze as much performance out of them as possible I decided to see if the main XFS file system could be defragmented. Much to my surprise, the utilities to do so were already installed and ready to run on one of my two IX2-200s.

Note for IX2’s that are not the “Cloud” version

I should note that even though my IX2-200s are both the non-cloud version, I did have a friend upgrade one of them to the cloud version's firmware. The cloud IX2 is the one that includes the XFS defrag utilities; however, I found that all I needed to do was use SFTP to copy the utility over to the non-cloud IX2 and it ran fine there as well. Since you may not have a cloud IX2 available to you, I have tarred up the missing file and you can download it HERE. After downloading it, use something like WinSCP to transfer it to the root directory of your IX2, and when needed you can run it by typing ./xfs_fsr. Please note that if you don't already have SSH enabled, you will need to do the next part before trying to upload the xfs_fsr utility to your IX2.

Enabling SSH

In order to get to the Linux shell we need to enable SSH on the IX2. This is a fairly simple task that is done from the web interface. Here are the steps:

  1. First login to the IX2-200 via the normal admin page.
  2. Next browse to https://The_NAS_IP/support.html
  3. Click “Support Access”
  4. Check the box next to “Allow remote access for support (SSH and SFTP)”

Here is a screenshot of the non-cloud version of the IX2

Now that you have SSH enabled, we can log in using PuTTY or another SSH client. The username is root and the password is "soho" combined with your web interface password. For example, if your password is "jpaul" then your root password is "sohojpaul".

Do you need to defragment ?

After we have enabled SSH and uploaded the xfs_fsr utility, we can check whether we really need to defrag the NAS. To make things easier, I took one screenshot with multiple highlighted areas, each in a different color; as I explain the process I will reference that screenshot and note which color I am referring to.

The first thing to do after logging in to the IX2 is to type the mount command (highlighted in yellow). This will list the mounted file systems on the NAS; we are looking for the last line in the output, which normally contains "/mnt/pools/A/A0" (if you have a non-cloud version it will say "/mnt/soho_storage"). After you find that, look over to the left and copy that device path (highlighted in blue) to Notepad or something. That is the RAID1 mirror that we will be defragmenting.

Next we need to run the 'xfs_db' command (highlighted in purple); this is the debug command that will help us find out how badly fragmented the file system is. First type 'xfs_db -r' followed by the device path you copied (highlighted in blue). This will bring you to a debug shell for XFS. Type 'frag' and you will see a fragmentation percentage for your NAS; mine is 85.77% (highlighted in orange). Type 'quit' to drop back to the Linux shell.

The next command is the actual defragmentation process, and it will take several hours to complete. It is highlighted in red. Type 'xfs_fsr -v' followed by the device path you copied earlier. PLEASE NOTE: if you have a non-cloud edition IX2, you will need to type './xfs_fsr -v' after making sure you are in the directory where you extracted the xfs_fsr utility.
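
Putting that all together, the whole session looks roughly like this on my cloud-edition IX2; substitute the device path you copied from the mount output, and use ./xfs_fsr instead if you uploaded the utility to a non-cloud unit:

mount
xfs_db -r /dev/mapper/40ad0e07_vg-lv4408ab81
xfs_db> frag
xfs_db> quit
xfs_fsr -v /dev/mapper/40ad0e07_vg-lv4408ab81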

You should start to see output like "extents before:XXX after:1 DONE"; this lets you know that the process is running, and if you are near the NAS you will notice that the hard drive light is on almost solid.

Video: HDD light during defrag

Future Prevention

After your defrag completes, you are probably thinking that you don't want to wait that long again, nor do you want performance to degrade if you forget to run it on a regular basis. For that reason, I will explain how to add a scheduled task so that this process runs automatically each week.

To implement this we will create a cron job that executes at midnight as Sunday begins (that is, Saturday night into Sunday morning).

Type 'crontab -e' as root and you will be brought into a text editor where we can paste the same command that we used before. The only difference is that we need to add the schedule fields in front of it so the command runs when we want.

For my system here is what I needed to paste in:

0 0 * * 0 /sbin/xfs_fsr -v /dev/mapper/40ad0e07_vg-lv4408ab81

For your system you will need to change the “/dev/mapper/40ad0e07_vg-lv4408ab81” to whatever you copied earlier which was in blue.

If you have a non-cloud version of the IX2 then your command will look like this:

0 0 * * 0 /xfs_fsr -v /dev/mapper/40ad0e07_vg-lv4408ab81

If you did not extract the xfs_fsr utility to root ( / ), then modify the command as needed. You will also need to change the /dev/mapper/… portion to whatever is specific to your system; you copied it down earlier. 😉

OK, you should be all set at this point: you have a NAS with a defragmented file system as well as a weekly maintenance task that will run the defrag command to keep things working nicely. If you need to change the time when the command runs, please reference this site: http://www.adminschoice.com/crontab-quick-reference as it has a great overview of how crontab works.

Update / Results:

I wrote this post just after starting the defrags on both NAS systems. This morning (the next day) I ran the 'xfs_db -r' command again to check how much fragmentation was left now that both devices had completed their run.

NAS1 before was 85.77%; now after the defrag it is only 1.54% fragmented!

NAS2 before was in the neighborhood of 65% (apparently I didn't record that info), but now it is only 0.27%!

Hopefully this will make a little difference in my day to day use of the systems.

Accelerating Branch Office File Sharing with MS BranchCache

I've done a bunch of articles about how to save bandwidth when you are replicating backups and virtual machines to your disaster recovery site; however, I don't think I have ever talked about anything that accelerates active data, in this case shared files.

Here is the official description off of the MS TechNet article at http://technet.microsoft.com/library/hh831696 :

“BranchCache is a wide area network (WAN) bandwidth optimization technology that is included in some editions of the Windows Server® “8” Beta and Windows® 8 Consumer Preview operating systems, as well as in some editions of Windows Server® 2008 R2 and Windows® 7. To optimize WAN bandwidth when users access content on remote servers, BranchCache copies content from your main office or hosted cloud content servers and caches the content at branch office locations, allowing client computers at branch offices to access the content locally rather than over the WAN.”

This is a HUGE innovation from Microsoft. I believe it will not only help traditional enterprises and their branch offices, but also SMBs that are looking to move their servers to the cloud but do not want to deal with very slow file downloads if they are not able to get big pipes.

So how does it work?

There are two types of BranchCache. One is designed for an office with no "server" at all, meaning there are only PCs at that branch location (distributed cache mode); the other is designed around a designated server at the branch that does all of the caching (hosted cache mode). Below are two diagrams of how they look; first is the server-based (hosted cache) BranchCache:

[Diagram: hosted cache (server-based) BranchCache]

Distributed BranchCache:

[Diagram: distributed cache BranchCache]

In either situation, here is an overview of what happens (a command-line sketch for enabling it on clients follows the list):

1.) A client transfers a document for the first time into a given branch office (defined by a subnet)
2.) The second computer that wants the same file asks local PCs or a BranchCache server whether they have a local copy of the file
3a.) If they do have a copy of the file, they transfer it locally across the LAN to the computer that requested it
3b.) If it is not already local, the server at the main site sends the file to the requesting PC
4.) If a client makes changes to the file, they are sent back to the main file server.
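
I have not set this up myself yet (more on that below), but based on the Microsoft documentation the client side of distributed mode can be switched on from an elevated command prompt on the Windows 7/8 machines, roughly like this:

netsh branchcache set service mode=DISTRIBUTED
netsh branchcache show status

The first command puts the client into distributed cache mode and the second confirms the current status; hosted cache mode and the file server side need a bit more configuration (a BranchCache role service and a hash publication policy), which is beyond this quick overview.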

Now you are probably thinking the same thing I first thought…why not just use DFSR?

BranchCache versus Distributed File System Replication

I found a good article on ServerFault.com that had the following list of pros and cons:

BranchCache

Pros

  • No version conflicts
  • Fast access for subsequent access

Cons

  • Slow access for first time access
  • Slow write access

DFSR

Pros

  • Quick read/write access to data at all times
  • A limited amount of additional data security

Cons

  • Backlogs can occur very easily
  • Version conflicts can be an issue with backlogs
  • Replication can take too long – not suited to real-time access to files between offices.

My Next Steps

I have not actually set this up in the lab yet, but that is my next step. I plan to use some virtual Windows 2008 R2 servers with some Windows 7 desktops (note that they must be Enterprise or Ultimate) and build a virtual router or two so that the machines believe they are in different offices (which, again, are designated by different subnets). I will report back after testing and let you know what my findings are and exactly how to set it up.

The Missing Manual Part 3: Application Aware Backups

One thing that I don't think is stressed enough in the Veeam manual is the "Application-Aware" check box, which is not checked by default. Because Veeam (like most other image-level backup software) does not do a true system state backup like in the old days, there can be some significant issues if you are restoring an Active Directory domain controller. However, Veeam can compensate for this problem if the "Application-Aware Image Processing" check box has been selected. If you restore a domain controller from a backup and that box was not checked, you run the risk of FUBARing Active Directory replication.

For  more information on why the problem happens check out this Microsoft KB article http://support.microsoft.com/kb/875495

I'm no Active Directory expert, but I did get to witness what happens when you restore a DC without application-aware processing, and I must say it is not fun to fix. The best way to avoid having to deal with it is to simply check the Application-Aware Image Processing box.

After you restore a system from a backup that did not have that box checked, symptoms include:

  • The Netlogon service is not running
  • Active Directory may not replicate between servers
  • Event ID 2103 in Event Viewer ("The Active Directory database has been restored using an unsupported procedure.")

I would encourage you to go through your backup jobs and verify that this check box has been checked and that you have valid domain credentials in the proper boxes below it on the same window.

 

vTiger Open Source CRM

This post is a little late because of some time issues I had last week. So without further delay…

This is week number three of my SMB open source software series, and so far we have a back office server solution and a desktop OS solution. To build on that, this week we are going to look at a CRM system that we can run on our server and use from any type of device.

vTiger is an open source Customer Relationship Management (CRM) project that also has a commercial side, so if you ever need support it is there, but to get started we can run the community version for free. So why does our small business need a CRM system? Let's take a look at all the stuff that vTiger does:

Now you might be thinking that with all these features this stuff has to cost money. Well, if you want EVERYTHING, or if you don't want to host this on your own server, then yes, it will cost you money. But remember, I said this was going to be open source software; not all of it will be free, though.

Here is a link to their site http://www.vtiger.com

I will say that I have set this up in my lab a couple times and installation is very easy. vTiger’s website also has a lot of documentation and training so it is one of the better options out there for open source CRM platforms.

Till next week…. ttyl

 

Ubuntu Desktop with Zentyal Integration

This week I thought we should build on the Zentyal Linux Small Business Server that we took a look at last Friday. After all, what good is a Linux server if you are still accessing it from a Windows desktop, right? (Although, because of its domain controller emulation, there really is no reason you couldn't join Windows machines to a Linux-controlled domain.) Anyhow, while looking through the documentation for Zentyal I found that there is an Ubuntu desktop package for both 10.04 and 10.10 that will configure your Ubuntu desktop to use the Zentyal server for LDAP authentication, mail, roaming profiles, and more!

How to get started

Note: Before starting I would encourage you to read this entire page. I have to admit that I tried this twice before getting it to work properly… and it was all due to not reading all of the documentation first.

So to get started you can use an existing or a new Ubuntu desktop running either 10.04 or 10.10; I chose to start with a new desktop since I was building all of this in VMware Workstation. After logging in, you need to open up a terminal window and add a custom source location to the apt sources.list file, and after doing that it is as simple as issuing one command:

apt-get update; apt-get install zentyal-desktop

This command updates the apt repository list and installs the required packages. Once I had gone through all of the configuration steps and told it where my Zentyal server is located, I rebooted the new desktop machine just to make sure everything started properly. At this point you have an Ubuntu desktop machine that will synchronize each user's home directory with their home directory on the server when they log in, and it will then sync any changes back to the server when the user logs off. This essentially creates a "My Documents" backup for every user as well, because when they save files to their home directory those files are copied to the server on logoff.

What does it all provide?

Besides central authentication and roaming profiles, this Zentyal-enabled Ubuntu desktop also provides many other features, including:

  • Oracle OpenOffice
  • Evolution email client (automatically configured to download mail from the server)
  • Pidgin instant messenger (automatically configured for the server's Jabber service)
  • Zarafa groupware
  • Ekiga VoIP softphone client (make calls from each desktop to real phones if you set up a SIP or IAX2 account with a provider)

These are mostly just the features that Zentyal integrates with, too, and in the coming weeks I will show you how to also set up an accounting package that leverages a MySQL backend stored on the server, with a native Linux client for the desktops! Remember, the goal is to provide all the components that an SMB would need to survive without Microsoft.

At this point we now have a back office server that provides email, groupware, central authentication, file sharing, and a LAMP stack. This week we also added an Ubuntu desktop operating system that integrates with the server to allow users to roam from one machine to another while maintaining their settings. Stay tuned for more updates and more projects to complete our open source SMB!