Testing Veeam Tape features with HP StoreOnce VTL

One of the things I don’t work with much is tape… in any fashion. However, while working on a project I was asked how Veeam handled tape out. I tried to think of where I had used tape with Veeam before and I couldn’t think of a single instance… which is probably why I have installed so many dedupe appliances 🙂 . So I did the only thing a real geek would do… I started looking for a tape drive for my home lab.

You would think, since people have been saying that tape is dead, that I would be able to pick up something pretty cheap. However, I soon figured out that quite the opposite was true if I wanted anything with a Fibre Channel interface. What to do?

What about a VTL? Both Data Domain and HP StoreOnce provide VTL features, but on the StoreOnce VSA you do not need a license to enable it. So I logged into one of the StoreOnce VSAs that I had already deployed and started messing around. It took very little effort before my Veeam server had a fully functional MSL 2024 attached… and the best part is that it didn’t cost me anything!

So the purpose of this article is to show how to get an HP StoreOnce VSA with the VTL option talking to Veeam Backup and Replication. I won’t cover anything in terms of creating jobs or best practices inside of Veeam, because frankly I don’t yet have the experience to be making those recommendations, but as I am able to play around with this more I will share what I learn. Also, for owners of Data Domain and/or HP StoreOnce, this article should not be considered a way to utilize those products in a production environment. I have to think that writing data directly to NFS/CIFS on these devices would be better than going through the VTL features. (The one case where the VTL might be useful is extended retention, where you want Veeam to keep GFS copies, but after I look into that more I will create a separate article for it.)

Getting the HP StoreOnce side ready

After deploying the StoreOnce VSA, log in to it and click on the VTL option. It should have “Auto Create” enabled.


If it is not enabled, click ‘Edit’ and then enable the option. This is all that is needed on the StoreOnce side to get started. We could manually create a VTL instead, but unless you want to copy and paste IQNs it’s easier to just let it auto-create one, and then we can edit it later.

iSCSI configuration on Veeam Server

The VTL features of the StoreOnce VSA rely on an iSCSI connection. So the first thing we need to do on the Veeam server is enable the iSCSI initiator (if it isn’t already enabled); you can enable it by simply opening the iSCSI Initiator in Control Panel.

Then enter the IP address of the HP StoreOnce VSA in the Target box and click ‘Quick Connect’. Then, in the Discovered targets area, click each of the targets and click Connect (at least the ones that are related to the HP StoreOnce).
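If you would rather script this than click through the GUI, the same discovery and login can be driven from Windows’ built-in iscsicli tool. Here is a rough Python sketch of that flow (the portal IP is a placeholder for your environment, and note that QLoginTarget sessions are not persistent across reboots, so treat this as a starting point rather than a finished tool):

```python
import subprocess

# Sketch only: this IP is a placeholder for your StoreOnce VSA's address.
PORTAL_IP = "192.168.1.50"

def parse_targets(list_output):
    """Pull the IQN lines out of `iscsicli ListTargets` output."""
    return [line.strip() for line in list_output.splitlines()
            if line.strip().startswith("iqn.")]

def connect_all(portal_ip):
    """Register the portal, discover targets, and log in to each one --
    the scripted equivalent of Quick Connect plus clicking Connect."""
    subprocess.run(["iscsicli", "QAddTargetPortal", portal_ip], check=True)
    listing = subprocess.run(["iscsicli", "ListTargets"],
                             capture_output=True, text=True, check=True)
    for iqn in parse_targets(listing.stdout):
        subprocess.run(["iscsicli", "QLoginTarget", iqn], check=True)

# connect_all(PORTAL_IP)  # run this on the Veeam server itself
```

You should end up with the same drive and robot sessions that the Quick Connect wizard would have created.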


By default you should have one drive and one robot listed in the targets window. After connecting the targets you can check Windows Device Manager to see if they show up. You are looking for a “Media Changer” and an “Unknown device”.


Before we install the drivers for these devices, we will want to go back over to the StoreOnce VSA and change the properties of the VTL to emulate an MSL 2024 (or whatever MSL you want to play with).

Configuring VTL settings on StoreOnce

Click on the VTL item on the left to expand it then select ‘Library’. On the right side of the page you should now see the VTL that was auto created when you connected to the VTL iSCSI targets.


By default it’s emulating a generic drive, but I wanted it to emulate the same MSL that I was looking at on eBay. So click on the library, then down at the very bottom of the page select ‘Edit’. Then select the “MSL G3 Series 2×24” emulation type. You can also select how many tapes you would like the VTL to present; by default it sets the max number for the emulation type. Click Save.

Once this has been changed we can go back over to Windows, install the HP MSL drivers, and then do the actual Veeam configuration.

Installing MSL Drivers on Veeam Tape Server

Some backup software wants you to use only their drivers, but Veeam is different: they aren’t in the driver business, so they recommend you use the manufacturer’s drivers. So for the HP MSL series that we are emulating you need to download the drivers from http://h18006.www1.hp.com/products/storageworks/tapecompatibility.html. FYI, I would fully expect this link to break once the HP/Hewlett-Packard split happens, so you might have to just Google “HP Tape compatibility and Tools”.


Once you get to this page click on the “HP StoreEver Tape Drivers” link.


At the time of this writing the driver version was 4.0.0 and the file name is cp023805.exe. Download the drivers, and then extract them to a folder somewhere on the Veeam server. I say extract because, for whatever reason, HP has decided to not allow any of their tape drivers to install in a virtual environment… HP, YOU SHOULD FIX THIS…

Here is the error you can expect if you click install.


So after extracting the files to somewhere on the server, head back into Device Manager, click on the robot or the drive, and select ‘Update Driver’. Then navigate to the folder where you extracted the drivers and select the OS folder that matches your Veeam server. Then click Next through all the pretty boxes until you see a screen that looks like this:


Then repeat for the other related device… either the drive or the robot:


Then you have all the drivers installed that you need. Now we can open Veeam and configure it to see the MSL.

Veeam Tape Configuration

After you get Veeam B&R opened up you can navigate to the Tape Infrastructure area. The first thing we need to do is select “Add a Tape Server” from the list. I installed the iSCSI VTL on the server listed as “This Server”, but if you are planning to dedicate a server to just doing tape, then select the appropriate server from the list.


If there are restrictions on traffic for this tape server you can set those up on this page; otherwise click Next.


Typical Veeam review page… next.


Here you can monitor the progress of the tape agent install… then click Next.


Normally I don’t show the Finish page, but this one contains an important check box: “Start tape library inventory”. Make sure that is checked so that you get an initial inventory of your tapes.



When you first open all of the new items on the left side, your tapes will appear under an unknown media pool, but as the inventory happens they will move over to the Free media pool. Once all of the tapes are in the Free media pool you are ready to start doing Veeam backups to tape!


Lastly, I wanted to send a big THANK YOU out to the HP Storage team for creating the StoreOnce VSA… If I didn’t have it, it would be much harder to learn and to do my job, so thank you!

Veeam & Data Domain: Advanced Setup and Replication

Recently while working with a customer we had to get a little creative with an installation and I thought I would share the experience.

Here is a little background info…

Datacenter and Hardware Layout:

  • Customer has two datacenters: one on the east coast, one in the midwest
  • Customer has vSphere clusters at both datacenters
  • Customer has purchased one new Data Domain 2500 for each datacenter
  • Customer has purchased Veeam licensing for each datacenter
  • Both vSphere Clusters run some production workloads
  • Replication pipe between sites is 1Gbps with triple redundancy

Customer Requirements:

  • Cross site replication of backups between datacenters
  • Ability to quickly restore a backup from datacenter A at datacenter B (and vice versa)
  • Ability to do restores locally at either site without connectivity to the other datacenter
  • Must use DD Boost to increase backup speed

Given this situation we had a few options:

  1. Install a “master” Veeam server at one location or the other and then create several proxies at each location
  2. Install a “master” Veeam server at each location as well as some proxies

There are advantages and disadvantages to each setup, and we actually tried both. The only real advantage to having a single Veeam server is that all jobs were in one pane of glass and all restores could be started from one place. The disadvantages were much more obvious.

What we found is that, because file-level restores happen on the “master” Veeam server, it took incredibly long to mount backup images from the east coast Data Domain to our midwest Veeam server. We were getting about 50Mbps on the wire while the mount operation happened… but from the local Data Domain we were getting almost 300Mbps.

Now, for full VM restores I don’t think it would be quite as bad, because we can specify a proxy at the remote site to do the full VMDK restore in hot-add mode… that will keep all the data local.

The solution

Because we had Data Domain replication in place, I decided to see if there would be a way to mount the replicated copy in read-only mode to the local Veeam “master” server. The idea is pretty simple… use a Veeam “master” server at each site for local backup and restore, and ALWAYS have restores pulling data from their local site’s Data Domain regardless of where the VM was originally backed up.

So here is how it works:

East Coast Datacenter:

Local Veeam “master” server does backups via DDBoost to its local Data Domain in an MTree called “veeam-east”. That MTree then uses native Data Domain replication to the Mid-West Data Domain to an MTree called “veeam-east-replica”.

Mid-West Datacenter:

Local Veeam “master” server does backups via DDBoost to its local Data Domain in an MTree called “veeam-midwest”. Then that MTree is cross replicated to the east coast DD MTree called “veeam-midwest-replica”.

So that takes care of backing up locally, and getting offsite replication… but the cool part is how we can do restores of data from the opposite datacenter…

The replicated MTrees are placed into read-only mode by default. However! The Data Domain box WILL still let you share that read-only folder via CIFS or NFS. Sooo, I thought: why not just share out the replicated MTrees via CIFS and have the Veeam “master” mount that share? Sure enough, this worked.

So the east coast Veeam “master” server has three backup repositories:

  • the default backup repository
  • one mounted via DDBoost integration called “Veeam-east”
  • one mounted via CIFS called “Veeam-Midwest-Replica”

The midwest Veeam “master” has the mirror of this:

  • a default repo
  • one DDBoost integrated share called “Veeam-midwest”
  • one CIFS mounted repo called “Veeam-east-replica”.

There is only one caveat to this setup. Because Veeam is not actively writing backup data to the read-only CIFS share, it does not automatically import new backups that have been replicated. So you do have to perform a manual “Re-Scan” of the CIFS repo so that it pulls in the latest restore points. (However, this can be automated with the Sync-VBRBackupRepository PowerShell cmdlet: http://helpcenter.veeam.com/backup/80/powershell/sync-vbrbackuprepository.html.)
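For example, a tiny wrapper like this could be dropped into Task Scheduler to keep the replica repo current. This is just a sketch: it assumes the Veeam v8 PowerShell snap-in is installed on the box running it, and that the repository name matches your setup.

```python
import subprocess

def rescan_command(repo_name):
    """Build the PowerShell invocation that rescans one repository."""
    ps = ("Add-PSSnapin VeeamPSSnapin; "
          f"Get-VBRBackupRepository -Name '{repo_name}' | "
          "Sync-VBRBackupRepository")
    return ["powershell", "-NoProfile", "-Command", ps]

def rescan(repo_name):
    """Run the rescan on the local Veeam 'master' server."""
    subprocess.run(rescan_command(repo_name), check=True)

# rescan("Veeam-east-replica")  # e.g. schedule hourly on the midwest server
```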

So here is what things look like: (click for pdf version)

Veeam Layout

Basically what I have tried to do here is show the backup data path. You will notice a line with arrows at each end between the local Veeam master server and its DD Boost repository, meaning primary backup and restore data flows between them; then a directional arrow between the primary MTree and the replicated MTree showing the direction of DD replication. Lastly there is a dashed directional line showing that, if a restore of data needs to be done at the opposite site, the local Veeam server has a read-only copy from the replicated DD MTree.

I should mention that the only reason you would want to use the read-only copy to do a restore is if your primary site is offline and you need your data back. I didn’t envision this as a replacement for datacenter migrations… however, with some more testing it may actually be a decent option.

Other thoughts

After seeing this particular customer’s requirements I do see some weaknesses in Veeam for a multisite deployment. But I’m not sure how many other backup products would really handle it any better. My suggestion would be to somehow use Veeam Enterprise Manager as a “vCenter Linked Mode”-type controller. Combine that with DD Boost multisite replication (meaning that, because of DD Boost, Veeam could control the replication from one site to another at the DD level) and you could then create a multisite backup catalog… then ideally all Veeam servers would know where all copies of the data are and be able to restore from them without any special hack like the one above.

Another advantage would be that restores during a disaster could be faster, provided that all backup catalog data was replicated to the other Veeam master servers, much like Active Directory data…

Veeam Cloud Connect Pricing Estimator

So, in the process of investigating whether I should become a Veeam Cloud Connect partner, I have developed some tools to help estimate both internal costs and customer pricing. So I figured I would share the workbooks that I created for customer pricing estimates.

It assumes a few things:

  • that you know how much your provider will be charging per GB of data
  • whether they will charge you a fee per VM that is protected (if they don’t spell it out it’s probably rolled into the price per GB)
  • Detailed knowledge of your backup jobs
    • Basically you need to know the average full backup size of the job (the VBK file in the backup folder)
    • the average incremental backup size (the VIB file in the backup folder)
    • the number of normal restore points as well as the number of GFS points to keep in the cloud

Don’t worry though, there are some examples provided in the Excel doc. This is version 1.0, but if I find any reason to update it, I will post the updates on this page.
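If you want a quick sanity check before opening the workbook, the core arithmetic boils down to something like this. It is a simplified sketch of my own, not the workbook itself: provider billing models vary, and the function names and parameters here are just illustrative.

```python
def storage_needed_gb(full_gb, inc_gb, restore_points, gfs_points=0):
    """One full plus the incrementals in the chain, plus one extra
    full backup for every GFS point kept in the cloud."""
    chain = full_gb + inc_gb * (restore_points - 1)
    return chain + gfs_points * full_gb

def monthly_cost(full_gb, inc_gb, restore_points, gfs_points,
                 price_per_gb, per_vm_fee=0.0, vm_count=0):
    """Estimated monthly charge: stored GB times the per-GB rate,
    plus any per-protected-VM fee the provider tacks on."""
    gb = storage_needed_gb(full_gb, inc_gb, restore_points, gfs_points)
    return gb * price_per_gb + per_vm_fee * vm_count

# Example: a 100 GB full, 10 GB incrementals, 7 restore points, no GFS
# works out to 160 GB of cloud storage to price out.
```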


Note: It seems WordPress switched the file from .xlsx to .xls… anyhow, when you open it, Excel complains that the extension doesn’t match the format. You can either switch the extension back to .xlsx before opening or just click Open Anyway.

Cloud Connect Price Estimator v1.0


Hadoop as a Veeam Repository

I was looking at the internet search queries that bring some of the readers to my site the other day when I found one that was rather interesting. It was simply “hadoop as veeam repository”.


Now, before I get too far into how to make this actually happen let me just point out a few things. First off, if you haven’t been to the disclaimer page in a while now may be a great time to check that out 🙂 . Second, I don’t think anyone has ever done this before… production or test… so if you are the person who was searching for this article: make sure to test this more thoroughly than I have before betting your job on this.


After thinking about this for a while, the concept seems really genius. The idea behind Hadoop is that you take a bunch of cheap computers with locally attached disks (typically large NL-SAS or SATA drives), interconnect them with a high-speed network, and use them to store a CRAP LOAD of data. Facebook, for example, reported that they had 21 petabytes in a single cluster… in 2010. So if you have all of that storage, and its cost per GB is pretty cheap, why not use some of it to store backups? After all, if the Hadoop admin is managing 21PB of storage, do you think he would ever miss a few hundred TB?


I have good news, bad news and some more good news.

First a little good news: Hadoop has a feature called the NFS Gateway that allows NFS access directly into and out of the Hadoop file system (HDFS). Uploading data into HDFS for processing by Hadoop and its different languages was a huge pain in the ass, so to make it easier they created the NFS Gateway to make it as simple as a Linux file copy.

Now for some bad news. After checking out most of the Hadoop distributions I have used before, as well as the Apache Hadoop page, I found that HDFS, and therefore the NFS Gateway, do not support random writes. This is probably going to be a show stopper in 99% of Hadoop deployments. Basically, the reason this makes doing backups to HDFS (the underlying file system of Hadoop) impossible is that HDFS stores data as objects, not as blocks. So if an object needs to be updated with just some data (not all of the data in the object), it basically has to read the entire object into a cache, update it, then write it back out to HDFS. I will do some more testing in this area to see if that is truly a show stopper, but my guess is it will be… If you are a Hadoop guru and have more information, please let me know… I would love to be wrong on this part :).

Now for a little more good news.

There is one distribution I found that doesn’t suffer from the shortcomings of HDFS. Enter MAPR. The MAPR Hadoop distribution replaces the standard HDFS file system with its own MAPR-FS file system. Because of this, they claim it will support pretty much anything you can throw at it in terms of read/write patterns. In my testing it was also the easiest to get working because it came with the NFS Gateway service running out of the box!

Making it work

I don’t want to ruin the ending, but I know the first part of this article is pretty wordy, so let me start by just saying I was able to get Veeam to do backups to the MAPR Hadoop sandbox that I set up. So there is no reason (as long as you’re running the MAPR distro) that this should not work on a larger scale.

The Architecture

For Veeam to write to any Linux folder it has to have an agent on that Linux machine. That agent can become CPU/memory intensive if you are pushing enough data through it. For that reason I chose to separate out that role to a dedicated server.

Architecture for Veeam to MAPR-FS

I will explain this in reverse order of the data flow. The backup data will rest on the MAPR Sandbox VM, which will also be the NFS Gateway server. It will receive data from the Linux NFS client in the middle. The NFS client VM basically uses native NFS protocols to mount the share from the MAPR NFS Gateway, and from this middle point on up it will appear as a normal Linux directory. On the Veeam server we do not need to enter any data about the MAPR cluster; instead we only need to know the IP address of the Linux NFS client. With that information we create a Linux server object in Veeam and then create a backup repository, making sure to select the NFS-mounted directory.

Again, the reason I did it this way was so that the Veeam services did not overload the MAPR sandbox VM. I suppose I could have cranked up the CPU and memory on that VM, but this way seemed to make more sense, because in a real-world scenario you would have many MAPR servers providing disk space and NFS gateway services, and if one of them died you would just toss that node. Remember, Hadoop nodes are like donkeys: we want their heavy lifting capability for the Hadoop farm, but we don’t care about them enough to give them names. So if a node dies we just toss it from the cluster and add a new one later; because of that mentality we wouldn’t want our NFS client running on one of them.

Backup Speed

Also, I should note that I was able to achieve about 60MB/s during the backups, which doesn’t seem like a lot, but remember this is a lab environment, and the VMs being backed up were on a 2-disk RAID 1 (600GB 15k SAS drives). Because all VMs involved were on the same VMware ESXi host, the network was line rate at 1Gbps; I didn’t get a chance to test at 10Gbps, but my storage wouldn’t support it anyhow. The MAPR VM was located on an iSCSI-attached VNXe3200 with 200GB of FAST Cache and 1.8TB of SAS RAID 5 disk.

Restore Speed

The restore speeds on the VM that I tested were just as fast, if not faster, than they are from a local Windows-based Veeam repository. So even though the data is flowing through many layers, it isn’t slowed down much. I think that if I were able to run this against a large MAPR-FS farm the speeds would be even more impressive. See the screenshots below for actual speeds, but it took about 5 minutes to restore a VM with 9.3GB of actual data on it.

Take away

This is certainly something that I want to keep my eye on. I think there will be very few people who find this article useful right now; many will probably find it interesting, but because almost all Hadoop distros don’t fully support random writes, I think it will be a while before many could take advantage. (I’m not sure how the user base of Hadoop as a whole compares to the MAPR user base.)


Here are some screenshots of the process. Remember if you click on them they will enlarge.

I added 3 100GB hard drives to my MAPR-FS
Screenshot of the Veeam backup server pushing data to the MAPR VM as well as process monitors on both the Linux NFS Client VM and the MAPR Sandbox VM. On the right you can also see the web interface of the MAPR sandbox and how many reads and writes its doing
Backup Completed to Hadoop and MAPR-FS
Restore speed from the VNXe3200 holding the MAPR Sandbox. After this screenshot it got up to a max of 81.7MB/s
Restoring a VM from a Hadoop / MAPR-FS Veeam Repo

Veeam Cloud Connect How To

If you are looking for an alternative to purchasing multiple disk-to-disk backup appliances you may want to pay close attention to this one. When Veeam released version 8 they stashed away a feature called Cloud Connect. Essentially, Cloud Connect lets you take the backup software you already have and do more with it. Before v8, if you wanted offsite Veeam backups, your best option was to get a pair of disk-to-disk backup appliances from EMC Data Domain, ExaGrid, HP, or one of the other providers; do your backups to the local box, then let that appliance replicate the data offsite to its sister box. Now, before I continue, let me just say that this is still a damn fine solution and one that is tried and true; however, for some customers it just doesn’t make sense.

One of the biggest reasons why it may not make sense is cost; the other big reason is location. By that I mean that some customers just don’t have a suitable second location to store the box at, and if you have to colocate the box at a service provider you are looking at hundreds of dollars a month just for that space… let alone the box.

So where does Veeam Backup and Replication v8 fit? Cloud Connect (on the client side) is basically just presenting a Veeam repository for Veeam to use as a destination for a copy job… much like you could use a tape device as a destination for a copy job.

Architecture overview of Veeam Backup with Cloud Connect


Some of the other highlights of this architecture (and Veeam Backup and Replication v8 in general) are end-to-end encryption and WAN acceleration. And to be perfectly honest… if you want to use a service provider to host your offsite backups… I would HIGHLY encourage you to use the WAN accelerator… even if it means upgrading your license to Enterprise Plus. In my personal testing, on my home-WAN-to-colo setup, I saw at least a 4x reduction in job time from using the WAN accelerator.

As for encryption… you can now set an encryption password for each Veeam job. It is as simple as that: your local Veeam password manager will store the password so that you can continue to operate Veeam just like you did before, but if someone outside of your Veeam instance tried to open your backup files they would have to provide the password in order to decrypt the data… just don’t lose your password… your service provider has no way to get it back for you.

So how do you setup Veeam to back up to the cloud?

The first step is to create a new service provider object in the Backup Infrastructure area of Veeam. If you just want to test Cloud Connect to see if you can do the setup, you are welcome to use the values in these screenshots, as my Cloud Connect server is publicly accessible and I will be providing a generic username and password with a 1GB quota in this article. If you want to actually test this on one of your jobs, though, you will need to email me so that I can create an account with a larger quota.

Step 1: Create a new Service Provider object from the backup infrastructure area.

So for the DNS name or IP I am using cc1.jpaul.me; while the port is configurable, my guess is that almost all providers will leave this as the standard port.

Step 2: Verify that your service provider’s SSL certificate is what it should be, and then set up your credentials

After you click Next, Veeam will check to see if the service provider is alive and download its SSL cert… look at me, I actually managed to get a REAL cert on this box 🙂

Step 3: Your service provider will assign these credentials for you

As for a temporary account that you can use if you just want to see how it works: the username is bloguser and the password is tryitout.

Again this account will only work for my server, and it has a 1 GB quota… If you want to actually test the service and see how long it might take for a job to complete email me at [email protected] and we can set up a separate account for you to test with.

Step 4: Details of your Cloud repository will be displayed

Here you will see what your provider has called your repository, as well as its quota size and whether WAN acceleration is enabled on the cloud provider side. Just a note here… a provider that is not offering WAN acceleration is crazy and probably should be avoided, in my opinion… it will save both parties a CRAP LOAD (yes, it’s a technical term) of bandwidth, and costs them nothing to enable.

Step 5: Click Finish, your Service provider and Cloud Repository is now ready to use

As per the Veeam norm, you will have a chance to click Finish on the last step after reviewing what is actually going to happen. After clicking Finish you are ready to create a copy job, and your Veeam Backup and Replication should be fully connected to the cloud connect service provider.

Please note that you may want to skip down to the “Creating a WAN accelerator” section if you have that feature. I listed it last as it isn’t required, but if you have the license you will certainly want to use it.

Creating a copy job

So, in order to actually use the service provider we have to get data uploading; to do that, Veeam leverages its copy job function, which has existed for some time now.

Step 1: Name the job, set copies per day setting

I just gave my job a name and matched the copy settings to the backup job settings: because my backup job only runs once per day, I left the copy job at once per day. Adjust as needed.

Step 2: Add VMs just like you would to a normal backup job. I used the “From Job” option to match my normal backup job to a Cloud Connect copy job.

Select which VMs will need to be backed up to the cloud.

Step 3: Select the cloud repo as well as the number of backups to keep. Then click the Advanced button.

Here you need to select the Cloud Backup repository as the job’s target.

As for what you want to keep on this target… remember that the provider is going to charge you based on the amount of data you are keeping there… not the size of the VMs you are backing up.

For example: let’s say you have a job that is backing up some VMs… the VBK size of a full backup is 100GB, and the size of the VIB files is 10GB for each day. From this information we can do some math.

If you want to keep 7 restore points on disk you will have to keep at least 1 full backup (100GB) plus 6 daily incrementals (10GB each). So you will need at least a 160GB quota to store it. However, in my opinion, because incrementals are linked to a full, you will probably balloon up to as many as 14 restore points, so if it were me I would plan to have a 400GB quota.

Now here is where things can get more complicated. If you also want to keep some Grandfather/Father/Son backups, you will need to multiply the number of those by the full backup size. For example, if I also wanted to keep a weekly full backup for 4 weeks, and monthly full backups for 3 months, that means I will need to keep a total of 7 additional full backups at my provider… which translates to an additional 700GB of storage.

(I’m developing a calculator to help with this, and as soon as I have it finished I will post it here.)
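In the meantime, the back-of-the-envelope math above can be sketched in a few lines. This is my own rough rule of thumb, not an official Veeam formula; the chain_factor of 2 reflects the ballooning to roughly two chains described above.

```python
def quota_estimate_gb(full_gb, inc_gb, restore_points,
                      gfs_fulls=0, chain_factor=2):
    """One chain is a full plus (restore_points - 1) incrementals.
    chain_factor=2 allows for two chains coexisting before the old
    one ages out; each GFS point costs another full backup."""
    chain = full_gb + inc_gb * (restore_points - 1)
    return chain * chain_factor + gfs_fulls * full_gb

# The example above: 100 GB full, 10 GB dailies, 7 restore points.
# One chain is 160 GB; doubled for ballooning that's 320 GB, and
# adding 7 GFS fulls (4 weekly + 3 monthly) brings it to 1,020 GB.
```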

Step 4: Set the encryption password inside of the storage tab of the Advanced button.

This is where you set a password for the job’s encryption. Remember, the service provider will not be able to retrieve your password, so if you don’t remember what it is, all of this work is for nothing when it comes time to do a restore.

Step 5: Select what WAN accelerator you want to use.

So here is where we can select a WAN accelerator… if you don’t have a WAN accelerator already set up and you are licensed to use the feature, then proceed down to the next section and do that first before setting up your job… or you can complete the job and then edit it after you set up the accelerator.

Step 6: Set a blackout window if needed

You are able to black out certain times if you want to avoid production hours… or in the case of my home WAN… the wife’s Facebook hours

(LOL, it’s a good thing she doesn’t read my blog… but let’s face it, even if she did… she would have stopped reading WAY before this point)


Step 7: Review step, click finish here 🙂

That is it. You are ready to back up to your cloud provider. And you just saved yourself a capex… but you added an opex. It’s hard to say which will be better for you… but this is definitely an option worth looking into if you are a smaller shop or just don’t want to have your own offsite storage box.


Bonus: How to enable WAN acceleration

Enabling WAN acceleration takes all of about 2 minutes, and it saves you a massive amount of bandwidth. So let’s look at how to get one going. (BTW, I already had one set up, so the screenshots are going to say “Edit” instead of “Create”, but the steps are exactly the same.)

Step 1: Select what server to use as a WAN accelerator

All you really need to do here is select a box that you want the service to run on. For a small environment this could easily be the Veeam server that you use to do your backups.

Step 2: Select where to put its cache files

Veeam will create an on-disk cache for storing common information… please note that whatever size you set this to will be consumed immediately upon completion of the wizard.

Step 3: Review, then be done

This is pretty much the last step… the rest of the process is simply applying the changes to the selected server and clicking Finish.

Note: Just because you installed a WAN accelerator doesn’t mean you are using it. If you created your copy job before you had a WAN accelerator set up, make sure to go back and add it to the job so that it actually gets used.

Excited about Veeam V8 yet? If you’re a Data Domain customer you should be!

Well, before you get too excited, let me start by saying I don't have any screenshots or cool hands-on stuff (yet!). But with that said, I am pretty excited about the upcoming release and some of the cool new stuff in store. Before we get to the Data Domain integration, let's talk about some of the other cool new stuff you can expect to see in V8.

New Features:

  • Snapshot Integration with NetApp
  • Replication from backup files – With v8 you will be able to choose where replication jobs obtain VM data from. In addition to production storage, you can now replicate from your backup files. You no longer have to touch your production environment twice, cutting the impact of data protection activities on your production storage in half.
  • WAN acceleration now works for replication jobs (not just copy jobs)!
  • DR Automation – The next version of Veeam will have the ability to plan out some of your DR steps and then help you execute them.

Full list is here: http://www.veeam.com/blog/v8-feature-announcements-major-replication-enhancements.html

Probably one of the most exciting features for me is the Data Domain Boost integration. I've been working with Data Domain for a while and DD Boost has always proven to be awesome… if your backup application supported it. So the news that my favorite backup platform was going to start supporting it was pretty exciting!

Before I get into how Veeam plans to integrate with DD Boost, you might be asking what DD Boost is. Data Domain Boost, or DD Boost as most call it, is a plugin for your backup application that runs on your backup server. It offloads part of the deduplication and compression process from the CPU and memory of the Data Domain and puts it onto the CPU and memory of your backup servers. This allows the Data Domain to focus on storing data without getting bogged down in compression and segment-uniqueness processing. (There are also some enhancements that make the backup app aware of DD replication, but I'm not sure if Veeam will be using those or not.)
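The concept is easy to sketch. This is not DD Boost's actual protocol, just a toy illustration of source-side deduplication: the backup server fingerprints each segment of the stream and only ships segments the appliance hasn't already stored:

```python
import hashlib

def dedupe_send(stream: bytes, stored: set, segment_size: int = 4096) -> int:
    """Fingerprint segments on the backup server; 'send' only unseen ones.
    Returns the number of bytes that actually cross the wire."""
    sent = 0
    for i in range(0, len(stream), segment_size):
        segment = stream[i:i + segment_size]
        fingerprint = hashlib.sha256(segment).hexdigest()
        if fingerprint not in stored:   # appliance has never seen this segment
            stored.add(fingerprint)
            sent += len(segment)
    return sent

stored = set()
backup = b"A" * 4096 + b"B" * 4096   # two distinct segments
print(dedupe_send(backup, stored))   # → 8192 (first backup: everything is new)
print(dedupe_send(backup, stored))   # → 0 (repeat backup: nothing to send)
```

The hashing and lookup work lands on the backup server's CPU, which is exactly the load DD Boost moves off of the appliance.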

EMC has a short YouTube video that is also helpful here https://www.youtube.com/watch?v=A0vKfAcIve4

So how will Veeam leverage DD Boost?

Traditionally, if you have a Data Domain and use Veeam, you are told to turn Veeam's built-in deduplication off and turn compression down. This takes a lot of CPU and memory load off of your Veeam servers. The problem is that it then puts that load on the Data Domain, which can lead to very slow backup and restore times on an overworked Data Domain system. The only real solution was to purchase a Data Domain with more CPU so that it could handle the load. Now that Veeam will integrate with DD Boost, we can put that load onto a Veeam server instead. Because Veeam servers can run alongside your other virtual machines (typically at night, when other systems are not in use), you can really maximize your investment in Data Domain along with Veeam! (and even your shiny new VMware hosts 🙂 )

Like I said before, though, I haven't been lucky enough to get ahold of any V8 beta code, so no pretty screenshots from me. However, I was able to find the following PDF by Rick at Veeam, who walks you through the process of setting up the new Data Domain Boost integration.




Building a Windows 2012 + Veeam 7 Backup Appliance Part 1

No one can deny the advantages of a dedupe appliance; the space and bandwidth savings they provide are astounding no matter the brand. While fairly affordable in all market segments, sometimes the budget just isn't there… so what to do? Well, one option, now that Windows Server 2012 has deduplication built in, is to use a physical server loaded with Windows 2012.

Setting up Windows and Configuring Dedupe

To get started, get your hands on a server (a physical server with lots of disks would probably work best); the only real requirement is that you need two separate drives. One will hold the operating system and the other stuff that normally sits on the "C:" drive, and the second will become our deduplicated backup storage area. Once you have a server and your RAID groups set up, load Windows 2012 onto it (for this how-to I am using Server 2012 R2), then configure the server with the basic settings you would normally set, such as hostname, IP address, domain settings, RDP, etc.

Next we need to add the deduplication feature, enable it, and format our backup storage. In "Server Manager" click on "Add roles and features". Click Next until you get to the "Server Roles" page, then scroll down until you see "File and Storage Services". Expand "File and iSCSI Services"; under this section you will find Data Deduplication. Check the box next to it, and when the Add Roles and Features Wizard prompts you, click the Add Features button.


Click next through the rest of the wizard pages and then click Install. Once the installation is complete click Close.


Next we need to configure our backup storage disk to use deduplication as well as get it formatted and online.

Click on File and Storage Services in Server Manager and then click Disks. You should have both your “C:” drive listed as well as your Backup Storage disk that isn’t formatted.


Right click on the disk you want to use as your backup storage and click “Bring Online”.

Then right click on the drive and click New Volume. A wizard will start; click Next on the first page, select the disk you want to format on the second page, then click Next. A box will pop up telling you the drive will be formatted; click OK to proceed.


The next page will allow you to set the size of the new volume. Unless you have a good reason not to, set the size to the maximum available.


Next assign a drive letter.


The default settings of NTFS and “Default” Allocation Unit Size work fine, so just give your volume a name and click Next.


Next we get to the actual deduplication settings. First, enable dedupe by selecting "General purpose file server" from the drop-down. Then select the number of days you want to keep data undeduplicated. This setting is important, as it determines when one Veeam session gets deduped against the rest of the sessions. If you have enough storage to hold at least 2x a complete full backup of everything plus one week of changes, then you can set this to 6 or 7 days. This ensures that at least one full backup is not deduplicated, so your SureBackup jobs and normal restores will run as quickly as possible. If, however, you are tight on space, you can set this as low as 1 day. This will yield the most space savings, but at the cost of slightly slower SureBackup jobs and normal restores.
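That sizing rule of thumb is easy to sanity-check with quick arithmetic (the sizes below are just examples, not recommendations):

```python
def undedupe_window_days(capacity_gb, full_backup_gb, daily_change_gb):
    """Pick the dedupe file-age window per the rule of thumb above:
    7 days if the volume can hold ~2x a full backup plus a week of
    changes undeduplicated, otherwise fall back to 1 day."""
    needed_gb = 2 * full_backup_gb + 7 * daily_change_gb
    return 7 if capacity_gb >= needed_gb else 1

print(undedupe_window_days(4000, 1500, 50))  # → 7 (3,350 GB needed, it fits)
print(undedupe_window_days(2000, 1500, 50))  # → 1 (tight on space)
```

(For what it's worth, the same file-age window can also be adjusted later from PowerShell; I believe the `Set-DedupVolume` cmdlet's `-MinimumFileAgeDays` parameter controls it, but check Microsoft's documentation before relying on that.)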


The last thing you will want to set up before clicking Next is the deduplication schedule; click the Set Deduplication Schedule button. This step basically allows you to control when the dedupe process has priority over everything else. I picked a start time of 5AM and allowed it to run for 10 hours. This makes dedupe a priority from 5AM to 3PM every day… which works perfectly, since my backups run from about 7PM to midnight. Adjust as needed.


After clicking Next you can review what will happen and then click Create to start the process. After the drive is ready to go you can click Close.


If you click on the disk in Server Manager you can now see the deduplication ratio and deduplication savings at the bottom. Right now it will obviously not look very cool since we have no data.


Configuring Veeam

I won't take the time to walk you through a Veeam Backup installation, since other articles on my blog cover that. Once you have Veeam up and running, we need to configure a backup repository that points to our dedupe drive. To do that, open Veeam and head over to the Backup Infrastructure section. Click on Backup Repositories on the left, right click in the white space on the right, and select "Add Backup Repository".

Give your repository a friendly name and description, then click Next.


Click Next on the Type page, as “Microsoft Windows server” is what we want.


On the next page click “Populate”, and then select the drive you created in step one. Then click Next.


On the Repository page click the Advanced button and check the boxes next to the two dedupe-friendly settings. Then click OK and Next to proceed.


Go ahead and leave all of the vPower settings alone unless you have a reason to change them. And then click Next and Finish to complete the wizard.

Lastly, while you are still on the Backup Repositories page, we will delete the "Default Backup Repository" just so you don't accidentally select it for a job. Before we can delete it, though, we need to reconfigure where the configuration backups will land. To do this, click the blue drop-down menu in the top left corner and select Configuration Backup.

Change the backup repository to the new Dedupe Store and click OK.


Now you can right click on the “Default Backup Repository” and select Remove.

You can now configure your backup jobs to use the new dedupe store. To see how much space Windows deduplication is saving you, refer back to the Disks section of Server Manager after the number of days you configured files to wait before deduplication. In my example I will need to wait a week before I see any benefits.

Stay tuned for the second post I'm working on, where I will test using DFS-R to replicate the dedupe backup data to another Windows 2012 server. Hopefully it will only replicate deduplicated data and not rehydrate the backups before sending them to its DFS-R partner, but we shall see.


Veeam 7 with Dedupe Appliances

I've written about Veeam in combination with a few different types of deduplication appliances, both ExaGrid and Data Domain. One thing that I always hear, no matter which dedupe appliance brand, is concern about SureBackup jobs, restore times, and just the general idea that things will be slow once deduped. So after getting an email from Stephan I decided to investigate his idea a little deeper… if this works for you, thank Stephan.

I have always said that if a deduplication appliance is sized properly for an environment, restores will still be faster than tape and generally acceptable. For some, however, it seems that only raw disk will satisfy their need for speed. With the new features of Veeam 7 you can use both raw disk and a dedupe appliance and get the best of both worlds.

Veeam introduced the ability to copy backups to a secondary location, as well as define retention schedules for that secondary location in a format many customers are used to (Grandfather-Father-Son). Typically this feature would be used by customers wanting to offload their backups to tape, but we can leverage it for offloading our backups to a dedupe appliance as well. For the primary location, we can select local disk or any raw disk target we want, to feed our need for speed.

In the screenshots below I am working with a virtual Veeam server that has a 500GB VMDK attached to act as my primary storage target. For secondary storage I have an EMC Data Domain 620 appliance. The goal: do primary backups to the locally attached storage of the Veeam server so that restores as well as SureBackup jobs can be run from raw disk at super fast speeds. The second goal is to copy the backups to the DD620 appliance and retain the last 7 daily backups as well as the last 4 weekly, 1 monthly, 1 quarterly, and 1 yearly. In short, raw disk will be used for short-term storage (say, 3 days of retention) and the Data Domain will hold everything (including much older backups).
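The retention scheme above (7 daily, 4 weekly, 1 monthly, 1 quarterly, 1 yearly) is a classic Grandfather-Father-Son policy. As a rough sketch of how such a policy decides what to keep (this is not Veeam's actual algorithm, just an illustration of the idea), each tier keeps the newest restore point per period:

```python
from datetime import date, timedelta

def gfs_keep(points, daily=7, weekly=4, monthly=1, quarterly=1, yearly=1):
    """Given restore-point dates, return the set a GFS policy would retain."""
    points = sorted(points, reverse=True)        # newest first
    keep = set(points[:daily])                   # last N daily restore points
    buckets = [
        (weekly,    lambda d: (d.isocalendar()[0], d.isocalendar()[1])),
        (monthly,   lambda d: (d.year, d.month)),
        (quarterly, lambda d: (d.year, (d.month - 1) // 3)),
        (yearly,    lambda d: d.year),
    ]
    for limit, period_of in buckets:
        seen = set()
        for d in points:
            if period_of(d) not in seen:
                seen.add(period_of(d))
                keep.add(d)                      # newest point in each period
            if len(seen) == limit:
                break
    return keep

# 60 nightly restore points from Jan 1 to Mar 1, 2014
pts = [date(2014, 1, 1) + timedelta(days=i) for i in range(60)]
print(len(gfs_keep(pts)))  # → 9 kept, the rest can be pruned
```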

Step 1: Initial Job Setup

First let's set up our Veeam job to back up our VMs to the local storage. Do this just like you would any other job; don't worry about selecting a secondary location yet.

Give your local backup job a name
Next add your VMs to the job
Now select the local repository and number of restore points to keep
Before clicking Next, go into the Advanced section and select Reverse Incremental. This will ensure that you only have one large file on your local storage.


Setup VSS Processing if needed (if it has anything to do with Active Directory, SQL, or Exchange… do this for sure)
Setup Scheduling, typically once per day, but set as needed

At this point you can run the backup job and make sure it completes successfully. Once it has, proceed to step 2.

Step 2: Setup Copy Job to Dedupe Appliance

Now that our backup job is done, we need to get the data to the dedupe appliance. Right click in the Backups window and select "Backup Copy…" to start the copy wizard.

Name the copy job as well as select how often a restore point should be copied
Next select Add on the right and then pick "From Job". Select your backup job that you just created in step 1
Now we select our secondary target location: the dedupe appliance. You can also specify how many restore points you want to keep in addition to how many restore points you want to archive. So if you want 14 days in addition to your archives, you set it up like this (just pretend that your short-term backups don't exist)
You can select direct transfer for this unless you only have one Data Domain and it is at a remote site, in which case you should opt for Veeam Enterprise Plus and use the WAN accelerator features
The copy job will watch for new restore points from the local backup job and copy each one it finds. If you want to limit when it can do copies, you can specify that here. (If your appliance is local, don't use this; but if it's across a WAN you might want to block out normal business hours.)
Click Finish 🙂

The Results

If you navigate over to the "Backups" section on the left you will see a list of all of the jobs, and when you expand a job it will show you the VMs in it as well as the restore points. As you can see, Veeam not only keeps the "local" copy we created in step 1 in the Backups section, it also keeps the "deduped" copy and archive retention in this inventory, under the copy job we created in step 2. So you now have two places to restore from, depending on how far back you need to go.

Both a Local and Deduped option for doing restores

As your retention builds you will start to see that the local copy only keeps around 3 restore points (because that's what we told it to), while the DD620 job continues to add restore points.

Retention building on the Data Domain, while local disk holds at 3 restore points

SureBackup

So to answer Stephan's question… will it make a difference for SureBackup? The answer, of course, is… maybe.

If you have a dedupe appliance that is over worked and cannot keep up in terms of CPU or spindle capacity, it will of course make a HUGE difference. But if you bought a monster dedupe appliance for your needs, then the difference may be minimal if any at all.

Because I'm lazy and wanted to get this post out, I simply did an entire VM restore from both the Data Domain and from local disk. Here are the differences.

Data Domain 620:

Data Domain restoring a Terminal Server VM

VMDK backed with 900GB 10k SAS drives:

Restoring the same terminal server, but from a local VMDK stored on SAN storage

As you would expect, the local restore from non-deduplicated, non-compressed storage is 2x faster. In conclusion, I would say that if you have plenty of storage space available on your SAN, or local storage on a repurposed server, why not take advantage of the new secondary location feature in Veeam 7? The rule of thumb is to keep only a few of the latest restore points on that disk, and make sure to use reverse incremental so you don't run out of space too quickly!

As for SureBackup… it's safe to say that if you get 2x the performance on a restore, you will certainly see faster SureBackup jobs too.


Veeam Support for vSphere 5.5 Arrives!

Veeam has officially released 7.0 R2, which includes support for VMware vSphere 5.5 and Microsoft Windows Server 2012 R2. I was able to apply the patch and start backups for a customer running vSphere 5.5 this morning, so it does seem to work.

New Features:


  • vSphere 5.5 support, including support for 62TB virtual disks and virtual hardware v10 virtual machines.
  • vCloud Director 5.5 support.
  • Support for Windows Server 2012 R2 and Windows 8.1 as guest virtual machines (VMs).
  • Added ability to limit maximum amount of active VM snapshots per datastore to prevent it from being overfilled with snapshot deltas. The default value of 4 active snapshots can be controlled with MaxSnapshotsPerDatastore (REG_DWORD) registry key.


  • Windows Server 2012 R2 Hyper-V and free Hyper-V Server 2012 R2 support, including support for Generation 2 virtual machines.
  • Support for Windows Server 2012 R2 and Windows 8.1 as guest virtual machines (VMs)
  • Support for System Center 2012 R2 Virtual Machine Manager (VMM)
  • Support for the installation of Veeam Backup & Replication and its components on Windows Server 2012 R2 and Windows 8.1.

Aside from the major stuff like vSphere 5.5, vCloud Director 5.5, and Hyper-V 2012 R2 support, one of the coolest things I see is the ability to limit the number of VM snapshots per datastore. This should save customers a boatload of problems.
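If you do need to change that default of 4, the release notes say it's a `REG_DWORD` registry value on the backup server. A hedged sketch of setting it; the key path below is my assumption of the standard Veeam location, so verify it against Veeam's KB article for your version before using:

```python
import sys

def set_max_snapshots_per_datastore(limit: int = 6) -> bool:
    """Set Veeam's MaxSnapshotsPerDatastore registry value (Windows only)."""
    if sys.platform != "win32":
        return False  # the registry only exists on the (Windows) Veeam server
    import winreg
    # Assumed key path; confirm against Veeam's KB before relying on it
    key_path = r"SOFTWARE\Veeam\Veeam Backup and Replication"
    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        winreg.SetValueEx(key, "MaxSnapshotsPerDatastore", 0,
                          winreg.REG_DWORD, limit)
    return True

print(set_max_snapshots_per_datastore(6))  # False anywhere but a Windows server
```

You would need to run this elevated on the Veeam server itself; a plain `.reg` file or `reg add` would do the same job.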

You can download the new patch from the KB article site here.


Veeam V7 vCloud Director Preview

Veeam has been advertising some of their Veeam Backup and Replication V7 features for a while now, but I finally got access to the beta and was allowed to bring you a preview of some of the new technology. This post is about how you can use Veeam V7 to protect a vCloud Director environment, and I figured what better test lab environment to protect than the Hands on Lab cloud!

From my initial testing it looks like Veeam will need to be installed alongside the vCloud Director installation. By that I mean that if you are using a public vCloud, you will not be able to use V7 to back up the VMs in your public vCloud… your provider, however, could use Veeam to do it for you. The reason is that Veeam still needs to talk to the vCenter server when backing up vCloud VMs, so if you do not have access to the vCenter server you won't be able to do backups.

Disclaimer: Remember, the information and screenshots are from a BETA release of V7 and some features may change, so this is strictly a high-level overview to give you a first look at how it will work. I make no guarantee that they won't change all of this before the GA release.

Setting up V7 for vCloud Protection

Just as in previous releases of Backup and Replication the first thing we must do is tell Veeam about our vCloud Director install and our vCenter server.

Step 1. Start by navigating to the "Backup Infrastructure" section on the left and then click on "Managed Servers". From there right click on Managed Servers and select "vCloud Director" from the list of servers to add.

Select vCloud Director from the bottom of the list

Step 2. Now fill out the Name form so that Veeam knows the URL of your vCloud Director server.

Fill in the vCloud Director server's FQDN. It should automatically generate the vCloud URL

Step 3. Next we need to give Veeam the credentials to log in to vCloud Director. Veeam has a new credential manager that helps organize all of your usernames and passwords, which makes setup much quicker.

Fill in credentials for Veeam to access vCloud Director

Step 4. Next up, Veeam will detect the vCenter server that is connected to vCloud Director and ask for vCenter credentials.

Next up fill in credentials for the vCenter servers which are connected to your vCloud Director

Step 5. Veeam will automatically connect to the vCenter server that is needed to do backups of your vCloud Director infrastructure. After it has completed view the summary page and then click finish. Next we will create a backup job.

Veeam will automatically add in your vCenter server after you give it credentials

Creating a backup job for vCloud Director objects

Step 1. Navigate back to the "Backup and Replication" section of Veeam. Then right click in the right pane (or select Backup from the top ribbon); this will start the backup job wizard. On the first page give the job a name.

Name the backup job

Step 2. Select the objects you want to protect. This part is pretty cool, you can select an entire vCloud Director environment, or drill down as granular as you want… all the way down to an individual VM in a vApp. (or anything in between)

Select whatever vCloud object(s) you want to backup.

Step 3. Here we will need to specify which backup repository to put this data in, as well as any of the advanced settings that you might want to change if using a Data Domain or other deduplication box.

Typical Veeam Backup settings

Step 4. If you need application consistency, fill in the credentials for it on this typical Veeam page.

Fill in VSS information if you like

Step 5. Finally, set up the proper job schedule for the protection you need (again, a typical Veeam Backup step you are already used to). Then click Create and Finish…

Setup a backup schedule as needed.

Restoring vCloud Director Objects

Step 1. Select the “Restore” button from the top ribbon and then select “vCloud” to start the wizard.

Click the restore button and select "vCloud" to start the restore wizard

Step 2. Pick whether to restore an entire vApp or a single VM

Select whether to restore an entire vApp or just a single VM

Step 3. Select the vApps you want to restore

Select the vApp you want to restore

Step 4. Next you can select which restore point you want to restore.

Review the list of vApps to restore as well as select which restore points to use

Step 5. Restore to a new location or to the original location?

Select restore location

Step 6. Enter a reason for restoring if needed


Step 7. Review the restore settings.

Review the settings for the restore
Review the settings for the restore

Step 8. Monitor the restore process.

Monitoring the restore process

Stay tuned for an article on the new tape device support… I just need to find a decent tape drive first 🙂