Moving my Blog to Azure and back with Zerto!

To Azure and Back, while also replacing hardware!

First, Let’s Recap!

In case you haven’t been following along, my latest personal project has been to eliminate my home lab entirely and consolidate everything at one location – the colo.

In the second article, I went over how I set up Zerto with Azure so that I could replicate my three VMs over to Azure. I posted that article on September 15th. The hardware part of the migration took 18 calendar days to complete, but the actual working time was much shorter because I took a week of vacation and worked my normal job during those 18 days. In total, I would say I spent about a week on the migration, the hardware swap, and the reconfiguration of all the new hardware. The time for the Zerto portions is documented in the videos below, which account for about an hour or so of the migration.

In the last video, I concluded with my blog VMs replicating to Azure; I just needed to pick a day to do the failover to Azure (which was really a Zerto Move operation, not a failover). So in this article, we will pick up there, and I’ll show you how the migration to Azure went, how the migration back went, and lastly give an overview of the new lab layout!

I really wanted to get this part posted as its own article before I went on vacation, but time just didn’t permit that to happen. Instead, I’m combining the move-to-Azure video and the move-back-from-Azure video into one post.

Moving to Azure

I moved all three blog-related VMs to Azure on Tuesday morning (September 19th), then shut down the hardware at the colo as well as the hardware in my garage. After the shutdown, I loaded up the VNX5300 and the HPE blades and headed down to Dayton. The move to Azure took less than 20 minutes from start to finish. The Zerto part of the move was even shorter; the rest of the time was spent switching DNS records, changing configuration files for WordPress, and just fumbling around making sure everything was working. Overall it went really well!
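
For the WordPress side, the change is basically just pointing the site at its new address. I edited configuration files by hand, but here is a minimal sketch of the same change done with WP-CLI instead (a swapped-in alternative, not what I actually ran) – blog.example.com is a placeholder hostname, not my real domain:

# Hypothetical WP-CLI sketch; run from the WordPress install directory.
# Update the two URL options WordPress uses to build links.
wp option update home 'https://blog.example.com'
wp option update siteurl 'https://blog.example.com'
# Flush the object cache so nothing keeps serving the old hostname.
wp cache flush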

I attribute how smoothly the migration went to Zerto’s ability to do test failovers without taking down production. During a test failover, I found a problem with my VMs that would have caused trouble on moving day: by default, Ubuntu and Debian do not have the agent running that Azure requires to configure networking properly. So I installed the Azure agent in my production VMs, which were still running at the colo, and a few minutes later I reran the failover test and it worked as expected.
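
If you hit the same issue, here is roughly what that fix looks like. A minimal sketch for Ubuntu/Debian, assuming the distro-packaged Azure Linux agent (walinuxagent) and systemd; run it in the source VMs before the test failover:

# Install the Azure Linux agent (walinuxagent is the Ubuntu/Debian
# package name) so Azure can configure networking after the move.
sudo apt-get update
sudo apt-get install -y walinuxagent
# Make sure the agent starts now and on every boot.
sudo systemctl enable --now walinuxagent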

Here is the video of the actual move to Azure; the total time, even with some manual config file changes in the VMs, is 17m 50s.

Returning from Azure

So after I reconfigured the two “production” blades and presented storage and networking to them, I was ready to move the workload back to the colo.

I have to admit, while my blog was in Azure it was running faster than it had in years! Page load times were down significantly, and overall it was much snappier. I even considered leaving it running in Azure, but paying an Azure bill when I have perfectly good servers at a colo seemed silly. So, in the end, I did move everything back to the colo… but I just wanted to point out that I could have left the workload there… it was strictly a financial decision.

So here is the video of the failback from Azure:

The New Lab

So now that the blog is back at the colo, I focused some time on getting the other six blades online and put to work. The new lab has to support everything my home lab did along with everything my old colo hardware did, and after a few late evenings of configuration work, I think I have it all back to the way I want it.

Here is a quick diagram of the blades; I color-coded them on the diagram, and their purposes are listed below:

Color Key:

Dark Blue

Gen8 dual-proc AMD blades with 32 cores and 100GB of RAM per blade – These are my “production” blades; they directly replace my old colo blades and run workloads that are static and need to be up 24×7. They also run Zerto, but as a Zerto Cloud Service Provider (ZCSP), which allows me to pair them with my Zerto lab so that I can demonstrate replication to a ZCSP.

Light Green

Gen8 dual-proc Intel blades with 8 cores and 118GB of RAM per blade – These blades were going to be used for Linux/KVM lab purposes, but since Zerto doesn’t currently support KVM, I had some extra blades… What to do with them? Why not a “competitive” lab? It’s not often that you have extra hardware that you can use as a playground. So this is an area where I can run software that I wouldn’t want in my Zerto lab, then tear it apart after testing a product and reconfigure as needed. (Still waiting for that Veeam v10 release… maybe they will send me a beta? LOL)

Orange

Gen8 dual-proc Intel blades with 8 cores and 40GB of RAM per blade – The orange blades are my Hyper-V blades. They run Hyper-V 2016 with SCVMM and have the latest version of Zerto on them. I use them to demo Zerto as well as to test replication to VMware, Azure, AWS, and my ZCSP deployment.

Red

Gen7 dual-proc Intel blades with 12 cores and 64GB of RAM per blade – These are the only G7-series blades left in my chassis. They run vSphere 6.0 (because they have issues with 6.5) and serve as “Site 1” in my Zerto lab. They are configured as a simple two-node HA cluster with DRS enabled and Fibre Channel storage.

Do they do anything else?

Of course! A bunch of hardware in a replication lab wouldn’t be much fun without something to replicate. So, as I study for certifications, I spin up VMs on the demo-lab blades to utilize their RAM and CPU capacity. I also have my DMZ accessible to all blades on a separate VLAN, which makes it super nice if I want to stand up a VM with a public IP… like a game server or something 🙂

Endless possibilities! So stay tuned, because if I do a how-to post, you can bet that it’s running on this gear!
