Ubiquiti Unifi Virtual Appliance

I have some Unifi wireless APs at my house and was trying to find a virtual appliance version of the Unifi controller, but was unable to. So I went ahead and created one myself. You are welcome to use it, but it does not come with any support or warranty from me. 🙂 It is simply a minimal Ubuntu 16.04 LTS install with the packages required to run the Unifi 5.0.7 controller software. The Unifi Controller software is pre-installed, so it will boot up and Unifi will start automatically!

Before you see the dashboard like the screenshot below, you will need to walk through the initial config because this appliance has a fresh install of the controller software. If you plan to import a configuration file from an existing controller, I would not adopt any APs during the initial config, nor would I configure any SSIDs; those will be imported automatically when you restore the config.

[Screenshot: Unifi controller dashboard]

When you fire it up, the credentials are ‘unifi/unifi’, and if you want root access you can sudo with the same password.

By default, it will try to pull DHCP from whatever virtual network it is attached to, but you are welcome to use the normal Ubuntu “interfaces” file to set a static IP.
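If you are not sure what that looks like on Ubuntu 16.04, here is a rough sketch of a static configuration in /etc/network/interfaces; the interface name (eth0) and all of the addresses are placeholders you will need to swap out for your own network:

# /etc/network/interfaces - example static config (adjust interface name and addresses)
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1

After saving the file, restart networking with 'sudo systemctl restart networking' (or just reboot the appliance) for the change to take effect.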

My deployment

I deployed this appliance for myself and was able to successfully import a backup of the config from my Windows-based controller without any issues. The coolest part was that all I had to do to migrate my APs to this new controller was shut down the old controller and import the config into this one! That’s AWESOME!

[Screenshot: imported configuration running on the new controller]

Now I just need to get some Unifi switches and a router to complete the Unifi Puzzle!

Looking for the UniFi Hardware?

If you haven’t completed your Ubiquiti Unifi hardware deployment, Amazon has great prices on all the UniFi hardware.

UniFi Security Gateway, UniFi PoE Switch, UniFi Wireless Access Point

OVF Download

I’ll try to keep this up to date as I update my controller with major releases. Please note that automatic Ubuntu security updates are not enabled on this appliance, so I would highly recommend installing them occasionally.
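If you prefer to do that by hand, the standard Ubuntu commands should be all you need; nothing here is specific to the appliance, it is just the normal apt workflow:

sudo apt-get update                         # refresh the package lists
sudo apt-get upgrade                        # install available updates, including security fixes
sudo apt-get install unattended-upgrades    # optional: have security updates apply automatically

Run the first two every so often, or install unattended-upgrades if you would rather not think about it.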

Unifi 5.0.7 – Ubuntu 16.04

Username: unifi Password: unifi

Download Size: 948MB

https://1drv.ms/u/s!ANFHYV92O1unqT0

 

Lima Ohio Raspberry Pi Users Meeting Saturday

Just a quick reminder that this Saturday, July 13th, is the first Raspberry Pi users group meeting.

The event is being sponsored by the Lima Regional IT Alliance (LRITA.org) as well as MCM Electronics. There will be pizza and an LRITA USB thumb drive for everyone who attends, and as a bonus both LRITA and MCM Electronics will be giving out Raspberry Pis (some with extra goodies) as door prizes!

For more information and to sign up check out the official LRITA page here: http://www.lrita.org/events/bit-talk/july-13,-2013-raspberry-pi-meet-up.aspx

Lastly, I want to give a huge shout out to Brian and the team at MCM Electronics. I literally emailed them this morning, and Brian not only agreed to get some door prizes for the event, but will actually be attending as well! If, however, you are like me and never win anything, I would definitely recommend checking out MCM’s site… it’s where I order all my Raspberry Pi stuff, and since they are less than two hours away I can go from “harebrained idea” to having hardware the next day when I order from them!

Defragmenting your Iomega IX2-200

Disclaimer: As with all howto posts here, I do not take any responsibility if you lose your data. I have done research to make sure that all information provided here is as accurate as possible, but there are always variables. Make sure that the information below is relevant to your environment, and always have a complete backup before trying this at home (or the office) 🙂

I have two Iomega IX2-200 NAS devices; they do a great job of storing my media as well as providing some iSCSI and NFS storage for my VMware lab. Unfortunately, because they only have two drives in them they have always been fairly slow, so in an effort to squeeze as much performance out of them as possible I decided to see if the main XFS file system could be defragmented. Much to my surprise, the utilities to do so were already installed and ready to run on one of my two IX2-200s.

Note for IX2s that are not the “Cloud” version

I should note that even though my IX2-200s are both the non-cloud version, I did have a friend upgrade one of them to the cloud version’s firmware. The cloud IX2 is the one that includes the XFS defrag utilities; however, I found that all I needed to do was use SFTP to copy the utility over to the non-cloud IX2 and it ran fine there as well. Since you may not have a cloud IX2 available to you, I have tarred up the missing file and you can download it HERE. After downloading it, use something like WinSCP to transfer it to the root directory of your IX2, and when needed you can run it by typing ./xfs_fsr. Please note that if you don’t already have SSH enabled, you will need to complete the next section before trying to upload the xfs_fsr utility to your IX2.

Enabling SSH

In order to get to the Linux shell we need to enable SSH on the IX2. This is a fairly simple task that is done from the web interface. Here are the steps:

  1. First login to the IX2-200 via the normal admin page.
  2. Next browse to https://The_NAS_IP/support.html
  3. Click “Support Access”
  4. Check the box next to “Allow remote access for support (SSH and SFTP)”

Here is a screenshot of the non-cloud version of the IX2

Now that you have SSH enabled, we can log in using PuTTY or other SSH utilities. The username is root and the password is “soho” followed by your web interface password. For example, if your web password is “jpaul” then your root password is “sohojpaul”.
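If you are on a machine with OpenSSH instead of PuTTY or WinSCP, the login and the upload of the xfs_fsr utility look roughly like this; the IP address and the tarball name are placeholders for your own:

ssh root@192.168.1.100                  # run from your PC; password is "soho" + your admin password
scp xfs_fsr.tar root@192.168.1.100:/    # run from your PC; copies the utility to the root directory of the NAS
cd / && tar -xf xfs_fsr.tar             # run this on the NAS to extract the utility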

Do you need to defragment?

After we have enabled SSH and uploaded the xfs_fsr utility, we can check to see if we really need to defrag the NAS. To help make things easier I have one screenshot with multiple highlighted areas, each in a different color. As I explain the process I will reference the same screenshot and note which color I am referring to.

The first thing to do after getting logged into the IX2 is to type the mount command (highlighted in yellow). This lists the mounted file systems on the NAS; we are looking for the last line in the output, which normally contains “/mnt/pools/A/A0” (if you have a non-cloud version it will say “/mnt/soho_storage”). After you find that, look over to the left and copy the device path (highlighted in blue) to Notepad or something similar. That is the RAID1 mirror that we will be defragmenting.

Next we need to run the ‘xfs_db’ command (highlighted in purple); this is the debug command that will help us find out how badly fragmented the file system is. First type ‘xfs_db -r’ followed by the device path you copied (highlighted in blue). This will bring you to a debug shell for XFS. Type ‘frag’ and you will see a fragmentation percentage for your NAS… mine is 85.77% (highlighted in orange). Type ‘quit’ to drop back to the Linux shell.
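Put together, the check looks something like this; the /dev/mapper path shown is the one from my unit (the blue highlight), so substitute whatever yours shows:

mount                                           # note the /dev/mapper/... device for /mnt/pools/A/A0 (or /mnt/soho_storage)
xfs_db -r /dev/mapper/40ad0e07_vg-lv4408ab81    # open the file system read-only in the XFS debugger
xfs_db> frag                                    # prints the fragmentation percentage
xfs_db> quit                                    # back to the Linux shell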

The next command is the actual defragmentation process, and it will take several hours to complete (highlighted in red). Type ‘xfs_fsr -v’ followed by the device path you copied earlier. PLEASE NOTE: if you have a non-cloud edition IX2, you will need to type ‘./xfs_fsr -v’ after making sure you are in the directory where you extracted the xfs_fsr utility.
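Again using my device path as a stand-in for yours, the two variants look like this:

xfs_fsr -v /dev/mapper/40ad0e07_vg-lv4408ab81          # cloud firmware: the utility is already installed
cd / && ./xfs_fsr -v /dev/mapper/40ad0e07_vg-lv4408ab81    # non-cloud: run the copy you uploaded to /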

You should start to see something like “extents before:XXX after:1 DONE”. This lets you know that the process is running, and if you are close to the NAS you will also notice the hard drive light is on almost solid.

Video: HDD light during defrag

Future Prevention

After your defrag completes you are probably thinking that you don’t want to wait that long again, nor do you want performance to degrade if you forget to run it on a regular basis. For that purpose I will explain how to add a scheduled task so that the process runs automatically each week.

To implement this we will create a cron job that executes at midnight at the start of each Sunday.

Type ‘crontab -e’ as root and you will be brought into a text editor where we can paste the same command we used before. The only difference is that we need to add the schedule fields at the front so the command runs when required.

For my system here is what I needed to paste in:

0 0 * * 0 /sbin/xfs_fsr -v /dev/mapper/40ad0e07_vg-lv4408ab81

For your system, you will need to change “/dev/mapper/40ad0e07_vg-lv4408ab81” to whatever you copied earlier (the part highlighted in blue).

If you have a non-cloud version of the IX2 then your command will look like this:

0 0 * * 0 /xfs_fsr -v /dev/mapper/40ad0e07_vg-lv4408ab81

If you did not extract the xfs_fsr utility to the root directory ( / ), then modify the command as needed. You will also need to change the /dev/mapper/… portion to whatever is specific to your system; you copied it down earlier 😉

OK, you should be all set at this point: you have a NAS with a defragmented file system as well as a weekly maintenance task that will run the defrag command to keep things working nicely. If you need to change when the command runs, reference this site: http://www.adminschoice.com/crontab-quick-reference as it has a great overview of how crontab works.

Update / Results:

I wrote this post just after starting the defrags on both NAS systems. This morning (the next day) I ran the ‘xfs_db -r’ command again to check how much fragmentation was left now that both devices had completed their runs.

NAS1 was 85.77% fragmented before; after the defrag it is only 1.54% fragmented!

NAS2 was in the neighborhood of 65% before (apparently I didn’t record that number), but now it is only 0.27%!

Hopefully this will make a little difference in my day-to-day use of the systems.

Updating Zimbra OpenSource Edition

So if you have been running Zimbra Open Source Edition for any decent amount of time, you have probably gotten emails similar to this one about an update being available.

I’ve worked with many open source Linux-based projects before, and most come with a fairly simple, built-in way to apply the latest updates. So after getting that email I logged into the administrator web interface and poked around, looking for a magical “Update” button. Much to my surprise there was no magical button, so off to Google I went, and after some research I found that to apply the update I just needed to download it (from the URL provided in the email), unzip it, and finally run the install script.

So first off I ran wget on the URL provided, and it turned out that the file was over 500MB! Clearly a “non-critical” update was not all I was downloading; after extracting it I found that to apply the update you actually download the entire Zimbra install package. Not exactly a quick download and update, but hey, it worked and didn’t bitch about any unmet dependencies.

So when you run the installer it detects the older version and asks if you want to upgrade, then it runs through the install/update process, which takes about the same amount of time as a new install. The catch is that it will shut down the services before it starts, to make sure things stay consistent, so if you’re going to update your server, pick an appropriate time, as users will not be able to retrieve mail.
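For reference, the whole upgrade boils down to just a few commands; the package name depends on the release the notification email points you to, so treat the file name below as a placeholder:

wget <URL-from-the-notification-email>    # grabs the full zcs-*.tgz install package
tar xzf zcs-*.tgz                         # extract it
cd zcs-*/
./install.sh                              # detects the existing install and offers to upgrade it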

Zimbra Adds Document Management Revisioning

I was reading through the Zimbra 7 release notes today and noticed that the Briefcase section has gained some new features with this release, mainly the ability to check out/check in documents and keep revisions of the older versions.

This immediately piqued my interest because I’ve always liked the idea of a collaborative file storage area hosted on a server that is still easy for the end user to work with. The first thing you need to do is share a briefcase. To do this, create a new briefcase (or use your personal briefcase), then right click on it and select “Share Folder”. After doing this you can type the email address of a user to share the folder with.

After selecting “Share Folder” you will see a new dialog box that lets you enter the email address of the person you want to share the folder with and the permission level you want to give them. They will get an email invitation similar to a meeting invite, where they can click Accept or Dismiss. The dialog box for adding people to the share looks like this:

After setting up all the users that need access, and once they accept, go to the Briefcase tab and you will see the shared briefcase in the list on the left. Each document inside the folder has a list of actions when you right click, including Check Out/Check In, Send as attachment, etc. (see screenshot).

Ben and I played around with a doc file today just to see how the revisioning and check out/check in works, and I have to say it is pretty slick for a free software product. Basically, you select “Check Out File”, which downloads the file from the server and opens it in the default editor. You then edit the file and save it to your PC. Then you go back to the server where the document is located, right click the file, and click “Check In File”; a box will open where you select the modified file to upload back to the server and enter revision notes. That is it!

After three revisions, here is what the interface shows:

Overall my experience is still positive. I have had it running for several months now and have to say it is still working like a champ. It has even caught a bunch of viruses in the last several weeks without letting anything through. Oh, and god knows how much spam it’s caught lately.

Linux software RAID rebuild notes

This week I had to replace a drive in a Linux-based VoIP system. The drive was a member of a RAID 1 array holding the OS, boot partition, and swap partition. Replacing the drive was pretty simple, just a one-for-one swap, but rebuilding the array was a little more involved than what most of us are used to with hardware RAID systems.

Basically, I had to manually copy the partition table from the remaining original drive to the new one, and then tell md (the Linux software RAID driver) that I wanted to add the new drive to the array and mirror the three partitions onto it.
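The commands looked roughly like this. The device names are assumptions for the sake of the example (/dev/sda as the surviving drive, /dev/sdb as the replacement, and md0 through md2 as the three arrays), so check /proc/mdstat on your own system before running anything:

sfdisk -d /dev/sda | sfdisk /dev/sdb       # copy the partition table from the good drive to the new one

mdadm --manage /dev/md0 --add /dev/sdb1    # add the new partitions back into each mirror
mdadm --manage /dev/md1 --add /dev/sdb2
mdadm --manage /dev/md2 --add /dev/sdb3

cat /proc/mdstat                           # watch the rebuild kick off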

The problem is that because this is software RAID, we are going to burn most of our CPU power and disk I/O rebuilding the array. Because this system handles voice communications, we cannot have all those system resources consumed by the rebuild, because that would severely affect voice services.

To counteract these problems there are two variables that we can modify to slow down the rebuild process so that critical voice services are not affected. These variables are:

speed_limit_min
speed_limit_max

These variables are located in ‘/proc/sys/dev/raid/’ and do exactly what you might expect. The ‘speed_limit_max’ variable limits the rebuild rate to a certain number of KB/s, and ‘speed_limit_min’ sets the minimum rebuild rate in KB/s. By default the minimum is set to 1,000 KB/s and the maximum is set to 200,000 KB/s, which leaves a lot of room for variation.
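Before changing anything you can check the current values. These also show up as the dev.raid.speed_limit_min and dev.raid.speed_limit_max sysctl keys, so a change can be made persistent through /etc/sysctl.conf if you want it to survive a reboot; the values below are just examples:

cat /proc/sys/dev/raid/speed_limit_min    # current minimum, in KB/s
cat /proc/sys/dev/raid/speed_limit_max    # current maximum, in KB/s

sysctl -w dev.raid.speed_limit_max=1000                       # same effect as echoing into /proc, via sysctl
echo "dev.raid.speed_limit_max = 1000" >> /etc/sysctl.conf    # example: make the lower ceiling permanent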

You could use these variables in two different ways. The first would be to issue the following command to raise the minimum rebuild rate, which increases the priority of the rebuild and gets the drive rebuilt faster:

'echo -n 10000 > /proc/sys/dev/raid/speed_limit_min'

But if you are in my situation, you can decrease speed_limit_max so that the rebuild is forced to slow down and free up resources for the rest of the system to use. You can do this by running the following command:

'echo -n 1000 > /proc/sys/dev/raid/speed_limit_max'

To check to see how fast your arrays are rebuilding you can run:

'cat /proc/mdstat'

 

With these commands I was able to control the rebuild rate and allow normal system operation to continue while the drive was rebuilding.