Configuring Data Domain SMT (Secure Multi-Tenancy)

One of the new features in DD OS 5.5 is SMT, or Secure Multi-Tenancy. Basically, it allows you to define which users have access to which MTrees. Getting it working is a bit of a mystery, though, as there was only one document I could find on how to configure it, and checking the web GUI for hints was worthless. With that said, you will need to get an SSH client from here.

What does SMT add?

SMT adds a few new things to the standard Data Domain system, including:

  • Tenant Units (think company name)
  • Tenant Administrators
  • Tenant Users
  • Tenant DD Boost Users
  • Tenant Storage Units (MTrees that are only readable and writable by users belonging to that tenant)

But at a really high level, all you are really doing is creating users and MTrees and using ACLs to control who can talk to what.

Enabling SMT

By default none of the multi-tenant features are enabled on a Data Domain system, but turning them on takes just a single command. Using PuTTY or a similar program, connect to your Data Domain and log in as sysadmin. From there, type the following command to enable SMT.

smt enable

That’s it! Now all of the SMT features are enabled and we can start using them.
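
If you want to double-check that it took, running smt status should report that SMT is enabled (command name from memory, so verify it against the DD OS command reference for your release if it complains).

smt status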

Creating a Tenant

In order to write data to the system through a tenant account we need to have an MTree for the data that is part of a tenant unit as well as a user account that is also associated with that tenant unit.

To create a tenant, run the following command, replacing “jpaul” with your tenant name.

smt tenant-unit create jpaul


Creating Tenant Users

Next we need to create our tenant administrator and tenant user accounts. These are the two standard account types associated with SMT tenants. After creating the two accounts we assign them to the tenant.

The commands are as follows (replace the example names with values from your environment):

user add jpaul-admin role none

user add jpaul-user role none

smt tenant-unit management-user assign jpaul-admin tenant-unit jpaul role tenant-admin

smt tenant-unit management-user assign jpaul-user tenant-unit jpaul role tenant-user


Creating Boost User and Storage Unit

Next we need to create a tenant user account that will be used for DD Boost connections. (Even if you don’t plan to use DD Boost, I would still create the storage unit with the ddboost command if you are licensed for it: you can’t turn on Boost for a plain MTree later, but you can turn on NFS and CIFS access to a Boost storage unit later.)

First, create the boost user account just like before:

user add jpaul-boost password password role none

Then set the default tenant option for the new boost user account. According to the guide I read, this makes sure the boost storage unit gets assigned to the correct tenant when we create it in the next step.

ddboost user option set jpaul-boost default-tenant-unit jpaul

Now let’s create a boost storage unit and assign it to the user and tenant.

ddboost storage-unit create jpaul-storage01 user jpaul-boost tenant-unit jpaul

That last command creates a storage unit called “jpaul-storage01”, assigns the boost user jpaul-boost to it, and makes it owned by the jpaul tenant.



Lastly, EMC recommends you set the distributed segment processing option; other than that you are done and ready to connect to your storage from your backup application.

ddboost option set distributed-segment-processing enabled

That’s all you need to do for setting up a tenant, associated users, and a storage unit.
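
For reference, here is the whole sequence in one place for our example tenant “jpaul” (swap in your own tenant name, user names, and a real password):

smt enable
smt tenant-unit create jpaul
user add jpaul-admin role none
user add jpaul-user role none
smt tenant-unit management-user assign jpaul-admin tenant-unit jpaul role tenant-admin
smt tenant-unit management-user assign jpaul-user tenant-unit jpaul role tenant-user
user add jpaul-boost password password role none
ddboost user option set jpaul-boost default-tenant-unit jpaul
ddboost storage-unit create jpaul-storage01 user jpaul-boost tenant-unit jpaul
ddboost option set distributed-segment-processing enabled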

Connecting to SMT Boost Storage

I mostly work with Veeam Backup & Replication, so that is what I will walk through configuring, but any DD Boost-aware backup platform like NetWorker, Avamar, or Symantec should be able to connect… just make sure to follow their best practices and use the user credentials you created above.

For Veeam the process is exactly the same as if you were connecting to a non-SMT DD Boost storage unit.

More Information

All of the information I found to get this going is from the Data Domain SMT with NetWorker integration guide, which can be found here. If you take a glance at that, it also explains how to do some general reporting and such. Definitely a good resource… which is good, since it was the only thing I could find.

Veeam & Data Domain: Advanced Setup and Replication

Recently while working with a customer we had to get a little creative with an installation and I thought I would share the experience.

Here is a little background info…

Datacenter and Hardware Layout:

  • Customer has 2 datacenters: one on the east coast, one in the midwest
  • Customer has vSphere clusters at both datacenters
  • Customer has purchased one new Data Domain 2500 for each datacenter
  • Customer has purchased Veeam licensing for each datacenter
  • Both vSphere Clusters run some production workloads
  • Replication pipe between sites is 1Gbps with triple redundancy

Customer Requirements:

  • Cross site replication of backups between datacenters
  • Ability to quickly restore a backup from datacenter A at datacenter B (and vice versa)
  • Ability to do restores locally at either site without connectivity to the other datacenter
  • Must use DD Boost to increase backup speed

Given this situation we had a few options:

  1. Install a “master” Veeam server at one location or the other and then create several proxies at each location
  2. Install a “master” Veeam server at each location as well as some proxies

There are advantages and disadvantages to each setup, and we actually tried both. The only real advantage to having a single Veeam server is that all jobs were in one pane of glass and all restores could be started from one place. The disadvantages were much more obvious.

What we found is that because file level restores happen on the “master” Veeam server, it took an incredibly long time to mount backup images from the east coast Data Domain to our midwest Veeam server. We were getting about 50Mbps on the wire while the mount operation happened… but from the local Data Domain we were getting almost 300Mbps.

Now, for full VM restores I don’t think it would be quite as bad, because we can specify a proxy at the remote site to do the full VMDK restore in hot-add mode… that keeps all the data local.

The Solution

Because we had Data Domain replication in place, I decided to see if there was a way to mount the replicated copy in read-only mode on the local Veeam “master” server. The idea is pretty simple… use a Veeam “master” server at each site for local backup and restore, and ALWAYS have restores pulling data from their local site’s Data Domain regardless of where the VM was originally backed up.

So here is how it works:

East Coast Datacenter:

The local Veeam “master” server does backups via DD Boost to its local Data Domain, into an MTree called “veeam-east”. That MTree is then replicated with native Data Domain replication to an MTree called “veeam-east-replica” on the Midwest Data Domain.

Mid-West Datacenter:

The local Veeam “master” server does backups via DD Boost to its local Data Domain, into an MTree called “veeam-midwest”. That MTree is then cross-replicated to an MTree on the east coast Data Domain called “veeam-midwest-replica”.

So that takes care of backing up locally, and getting offsite replication… but the cool part is how we can do restores of data from the opposite datacenter…

The replicated MTrees are placed into read-only mode by default. However, the Data Domain box WILL still let you share that read-only folder via CIFS or NFS. So I thought, why not just share out the replicated MTrees via CIFS and have the Veeam “master” mount that share? Sure enough, this worked.

So the east coast Veeam “master” server has three backup repositories:

  • the default backup repository
  • one mounted via DDBoost integration called “Veeam-east”
  • one mounted via CIFS called “Veeam-Midwest-Replica”

The midwest Veeam “master” has the mirror of this:

  • a default repo
  • one DDBoost integrated share called “Veeam-midwest”
  • one CIFS mounted repo called “Veeam-east-replica”.

There is only one caveat to this setup. Because Veeam is not actively writing backup data to the read-only CIFS share, it does not automatically import new backups that have been replicated. So you do have to perform a manual “Re-Scan” of the CIFS repo so that it pulls in the latest restore points. (However, this can be automated with the Sync-VBRBackupRepository PowerShell cmdlet: http://helpcenter.veeam.com/backup/80/powershell/sync-vbrbackuprepository.html.)
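
If you would rather not remember to click Re-Scan, a scheduled task running something along these lines should do it. This is just a minimal sketch assuming the Veeam B&R 8 PowerShell snap-in and the repository name from my example, so adjust both to match your environment:

# Load the Veeam B&R 8 snap-in and rescan the read-only CIFS repository
# "Veeam-Midwest-Replica" is the example repository name used in this post
Add-PSSnapin VeeamPSSnapin -ErrorAction SilentlyContinue
$repo = Get-VBRBackupRepository -Name "Veeam-Midwest-Replica"
Sync-VBRBackupRepository -Repository $repo

Drop that into Windows Task Scheduler on the Veeam “master” server to run after your replication window and the new restore points will show up on their own.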

So here is what things look like: (click for pdf version)

Veeam Layout

Basically what I have tried to do here is show the backup data path. You will notice a line with arrows at each end between the local Veeam master server and its DD Boost repository, meaning primary backup and restore data flows between them. Then there is a directional arrow between the primary MTree and the replicated MTree showing the direction of DD replication. Lastly, a dashed directional line shows that if a restore needs to be done at the opposite site, the local Veeam server has a read-only copy from the replicated DD MTree.

I should mention that the only reason you would want to use the read-only copy to do a restore is if your primary site is offline and you need your data back. I didn’t envision this as a replacement for datacenter migrations… however, with some more testing it may actually be a decent option.

Other thoughts

After seeing this particular customer’s requirements, I do see some weakness in Veeam for a multisite deployment. But I’m not sure how many other backup products would really handle it any better. My suggestion would be to somehow use Veeam Enterprise Manager as a “vCenter Linked Mode” type controller. Combine that with DD Boost multisite replication (meaning that, because of DD Boost, Veeam could control the replication from one site to another at the DD level) and you could then create a multisite backup catalog… then, ideally, all Veeam servers would know where all copies of the data are and be able to restore from them without any special hack like the one above.

Another advantage of this would be that restores during a disaster could be faster, provided that all backup catalog data was replicated, Active Directory-style, to the other Veeam master servers…

What’s new with EMC Data Domain

Just a quick note: take all of the stats with a grain of salt until I can get access to the slide deck, as I was taking notes as quickly as possible while they were talking.

This week at EMC World 2015 Caitlin Gordon from the Data Domain Group announced the new DD9500, the replacement for the DD990. It will ship with DD OS 5.6 and process up to 58.7TB/hr when combined with DDBoost (27.7TB/hr without it).

The new DD9500 leverages both flash and traditional disk and provides two tiers of storage: one tier for mission critical data and one for archive and long term data. In terms of size, they say it will hold upwards of 86.4PB of logical storage (i.e. when you factor in dedupe and compression). It also allows up to 1080 streams for backup jobs, replication jobs, etc.

Announcements were also made about the DD2200 line, specifically a new 4TB capacity size aimed at replacing the DD160. List price is said to be about $9k on that entry point, but it will allow for future upgrades to a 7.5TB or 13.2TB footprint.

Processing rates on the DD2200 are 4.7TB/hr, although they did not say if that was with or without Boost, and it has up to 60 streams.

Probably the biggest part of the What’s New with Data Domain session, at least for me, was the announcement of Project Falcon. Project Falcon is a DD virtual appliance meant for remote offices or cloud provider environments. But let’s face it, this will be badass to get into my lab. Can’t wait to put it up against the HP StoreOnce appliance as well as the Quantum DXi virtual appliance.

There were some other portions of the announcement around DD Boost for HP-UX and AIX via NetWorker and enterprise applications, as well as encryption at rest for DD Extended Retention.

Other announcements include support for backing up Hadoop data lakes and NoSQL via distcp, as well as from Isilon-managed snapshots.

Lastly they talked about ProtectPoint and its ability to do direct backups from enterprise applications to the Data Domain box. What I thought was cool about this solution was its ability to let the application owner drive all of the functions, from kicking off a backup to doing item level restores.


Boosting DR Replication with SilverPeak

It’s always an anxious day for me when I have to take a customer’s DR hardware to their DR site… not because I’m carrying anywhere from $100-400k worth of customer equipment in my SUV (that’s the insurance guy’s problem)… but because sizing DR replication is almost a black art sometimes. Sure, you can get pretty close using historic graphs and stuff like that… but at the end of the day you can’t plan for everything. I mean, what if Mr. D. A. Taskworker decides his MP3 collection deserves to be backed up to the file server?

The obvious answer (if I were Santa Claus) would be 10 gig WAN connections for everyone! But I’m not, so I decided to try out another WAN optimizer to see what it could do in the real world.

The Product

This time it’s Silver Peak’s VX WAN Optimizer. It comes as a virtual appliance, which makes it super easy to get installed, and since I only use it for DR replication the “server mode” option works perfectly and doesn’t require me to involve a network guy at all.

Before we get into the meat and potatoes, let’s look at what the Throughput Calculator (found on the bottom right side) suggests we will be able to get. Oh, and in this example the customer has a 20 meg WAN connection, but we are going to be limiting replication to 10 or 15 meg. Anyhow, I entered the parameters I would be using into the calculator and here were the results.

Settings: 10Mbps, VPN over the Internet, for Replication. Results… Silver Peak claims my 10Mbps will look like 60Mbps!



Setting up Silver Peak

After you get the OVF file downloaded, deploy it with the vSphere Client. Once it is on your cluster at each side (Prod and DR), fire up the appliances. If you have DHCP out there it will grab an address; to find out what that address is, log in to each with admin/admin. No need to do anything else unless you don’t have DHCP; if you don’t, you can follow the VX setup guide found on the Silver Peak website to apply an IP address.

Once we have the IP address we can go to the web interface and do the remainder of the configuration from there. After you log in, an initial configuration wizard will start; it’s pretty simple to walk through.

Typical welcome screen


On the second page you enter the hostname for the site, and you can also switch the appliance to a static IP.


Next, set your NTP settings, change the admin password if you wish, etc.


Remember me telling you that this was going to sit on the network like a server? Just click next.


On the next step it shows you a pretty picture of how things will work, and allows you to set your bandwidth cap. We can change this later if needed btw.


On this last page we need to add in the remote (DR) site’s IP address… even if you do not have that virtual appliance set up yet, I would still recommend you put it in. That way, once you run the setup wizard on the other virtual appliance, the tunnel between sites will come up automatically.


After that step finish the wizard and then repeat the process for your other site.

Verify that the tunnel is up

Before traffic has even the slightest chance of being optimized, we have to make sure that the tunnel between sites is up. If it doesn’t come up, traffic will still make it over there, but it will not be optimized. To check, go to the Configuration menu and select Tunnel. There you will see a list of tunnels; in this case we only have one, but as you can see it is “up – active”, which is what we are looking for.


Configuring Zerto to use Silver Peak

Because we have Silver Peak sitting on our network, all we need to do is change the default gateway on our VRAs to the Silver Peak address. Literally, that is all. I suppose you could change your ZVM gateway to use Silver Peak as well, but in my testing I didn’t… and I’m not sure it is really needed, since data is transferred between VRAs only.

Log in to Zerto and click the VRA tab (if you are running 3.1; otherwise click the gear cog at the top, then Manage VRAs). Then select one of the VRAs and click Edit. Change the Default Gateway to the Silver Peak IP address and click OK. Repeat this for ALL VRAs at both the Prod and DR sites. After doing so it will take the Zerto interface a few refresh cycles to come back to all green. Once it does and replication resumes, you can head back over to the Silver Peak interface and check the stats to see how things are doing.


For this particular project there were also Data Domain boxes at each site, so we are going to redirect that traffic through Silver Peak as well. You can either change the default gateway on the Data Domains to be the Silver Peak virtual appliance, or you can create a static route to the sister box and tell it to use the Silver Peak address just for that route… the choice is yours.

The Results

Well, I won’t keep you waiting any longer, and I really hate to squash your hopes of 60Mbps… but in my testing (with real data at a real company) the results were honestly much less impressive.

While I did see spikes as high as 40-50Mbps, consistent speeds were much lower. The VX appliance claims that it is saving me about 17-20% on the Zerto and Data Domain streams… so still not too bad, considering that most of the data will have already been deduplicated and compressed. Below are some supporting screenshots.

Conclusions

There is a saying among car enthusiasts that “there is no replacement for displacement”, meaning that at the end of the day things like superchargers or turbochargers or chemical power adders can only do so much. The same is true in terms of bandwidth. We can add things like Silver Peak, or NetEx HyperIP, or Riverbed, etc. But at the end of the day nothing can really replace just having a huge WAN connection.

Silver Peak was able to give me about a 20% boost… so I certainly am not complaining. In this example the customer is paying approximately $25 per Mbps at their DR site. So if I can add 20% more effective bandwidth on our 15Mbps replication cap, that brings us up to about 18Mbps… saving us about $75 per month. Double that ($75 at the colo and $75 at HQ) and it would take roughly 12 to 18 months to see our ROI on Silver Peak.
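
For those who like to see the math spelled out, here is the back-of-the-napkin version using the 15Mbps replication cap from this project:

15Mbps x 20% = ~3Mbps of “extra” effective bandwidth
3Mbps x $25 per Mbps = ~$75 per month per site
$75 x 2 sites = ~$150 per month in bandwidth we didn’t have to buy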

Another added benefit of using Silver Peak, aside from giving us more bandwidth, is the ability to classify our traffic and then QoS it. Meaning that if I want to use 15Mbps for replication, be it Zerto or Data Domain, I can tell the VX appliances to only use 15Mbps… and then I can configure QoS on the VX appliance to make Zerto traffic a higher priority than the Data Domain replication when there is congestion on that 15Mbps. Again… all without involving one Cisco engineer 🙂

Upgrading Data Domain Appliances without Sacrificing Capacity

Data Domain DD620 appliances come in two flavors, a 12TB version and a 7TB version; the chassis is the same for both, only the drive count is different. The 12TB version has 12x1TB drives in RAID 6, and the 7TB version has 7x1TB drives in RAID 6 (both versions use 1x1TB drive as a hot spare). Usable capacity according to the DD hardware guide is about 7.8TB for the 12 drive model and 3.3TB for the 7 drive model. The interesting part is when you want to upgrade your 7TB model to the 12TB model…

I was reading through the DD620 hardware guide the other day when I noticed a 2TB difference in the capacity of a DD620 that was purchased with all 12 drives from the factory versus a unit that was purchased with 7 drives and later upgraded to 12 drives. (The same issue applies to DD160 boxes that are upgraded, but you only give up about 1TB instead of 2TB.)

Notice anything about the “7+5 drive” configuration? Almost exactly 2TB less than if you had purchased all 12 drives to start with.

I assume this is because initially the Data Domain is configured with 1 hot spare drive and 6 drives in a RAID 6 configuration, giving you about 3.3TB usable after formatting; then when you add the 5 new drives, the unit configures them as a new RAID 6 array in a 3+2 drive config, which is about 2.5TB usable after formatting. So in total you end up with about 5.8-5.9TB usable for storing your backups. Basically, instead of only losing 2 drives to parity like the factory unit does, you’re going to lose 4 drives total to parity after you add the new drives.
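
Rough math, using the usable capacities from the hardware guide:

Factory 12 drive unit: 1 hot spare + 11 drives in one RAID 6 group = 9 data drives, ~7.8TB usable
7 drive unit: 1 hot spare + 6 drives in RAID 6 = 4 data drives, ~3.3TB usable
Add 5 drives later: a second RAID 6 group of 3+2 = 3 data drives, ~2.5TB usable
Upgraded total: ~3.3TB + ~2.5TB = ~5.8TB, roughly 2TB less than the factory 12 drive unit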

Since I like to get the most for my money, I opened a ticket with Data Domain to see if it was possible to bypass this upgrade penalty by reimaging the DD operating system after installing the new drives. They confirmed that reimaging the system would be the only way to avoid giving up that extra 2TB of usable space. So if you are in the market to upgrade your DD620 or DD160, I would make sure you contact support and have them get you access to the reimaging USB software.

So now you’re probably thinking: great… all I have to do is reimage the box and kill all my backups to get that extra 2TB… and you would be right. But if you have a pair of Data Domains, you can probably get around losing the data.

First upgrade your primary DD appliance with the new drives and do the reimage process.

Next re-establish the replication pairs but in reverse order so that your DR site box sends all of the data back to the primary box. You will probably want to bring your DR box back to HQ for this.

Then, after replication is complete, you can unpause your backup jobs and start the upgrade on the DR box.

Once that box has been upgraded with the new drives, re-enable replication in the proper direction and let the data re-replicate back to the DR box.
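
For what it’s worth, the reverse re-seed and the flip back would look something along these lines from the CLI. The hostnames and MTree names here are made up for the example, and the replication commands are from memory, so double-check the exact syntax against the DD OS administration guide for your release (the context has to be added on both boxes, and the initialize is run from the source):

replication add source mtree://dd-dr.example.com/data/col1/backups destination mtree://dd-hq.example.com/data/col1/backups
replication initialize mtree://dd-hq.example.com/data/col1/backups
replication status

Once it shows in sync, break that temporary pair and recreate it in the normal HQ-to-DR direction:

replication break mtree://dd-hq.example.com/data/col1/backups
replication add source mtree://dd-hq.example.com/data/col1/backups destination mtree://dd-dr.example.com/data/col1/backups
replication initialize mtree://dd-dr.example.com/data/col1/backups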

At this point both boxes should see all 7.8TB of space and you didn’t lose your backup data!


Note: I have not had to do this yet… but if I ever have to do this in the “real world” I will certainly post my results as an update.

Veeam and Data Domain: A Case Study

This is the story of a customer who recently implemented Veeam and Data Domain; all names have been changed to protect the innocent, etc etc 🙂

Background

“BOB’S BAIT AND TACKLE” was running VMware 4.1 in a mostly HP environment; their SAN was an HP EVA4400, and the VMware servers were HP blades. For backup they were using HP Data Protector software along with an HP tape library, which was no longer able to meet their backup window requirements.

With a VMware upgrade on the horizon and with SAN space dwindling “BOB’S BAIT AND TACKLE” started looking for a replacement solution for their backup and storage environment.

The Problems

“BOB’S BAIT AND TACKLE”’s existing infrastructure was limited to approximately 10TB of SAN storage (they store a lot of secret customer info or fish pictures, I guess), and backups were limited to the speed of their tape autoloader and backup server. Because of this, full backups of their Exchange environment required the entire weekend to run (time that could be spent fishing), and other servers also took many hours to back up.

Their plans to migrate to a new ERP solution also had to be pushed back because of a lack of free storage on their HP EVA4400 SAN.

The Solution

The IT team proposed replacing their storage environment with a new EMC VNX5300 Fibre Channel SAN, and replacing the tape library and HP Data Protector with a pair of Data Domain DD620s and Veeam Backup software.

The VNX5300 would include over 45TB of raw capacity, which would give “BOB’S BAIT AND TACKLE” years of growth (and plenty of room to store fishing pictures). It also included advanced features such as Enterprise Flash Drives (aka EFDs, or SSDs) and the EMC FAST Suite, which allows data to automatically move between tiers of disk for the best performance and cost per gigabyte.

The Data Domain DD620 boxes were chosen because “BOB’S BAIT AND TACKLE” wanted to reduce or eliminate the use of tape storage if possible. To facilitate the decommissioning of tape while maintaining an offsite backup, “BOB’S BAIT AND TACKLE” selected a second site across town which would be connected with dedicated fiber where they would locate their sister Data Domain appliance.

Veeam was chosen as the backup software for “BOB’S BAIT AND TACKLE” because almost all of their servers are now virtualized, and the handful of physical machines that remain are scheduled to be virtualized in the future (they went on a fishing trip and didn’t have time to finish, I guess). Veeam also integrates tightly with VMware to provide quicker backups of virtual machines by leveraging VADP (the VMware APIs for Data Protection), and it reduces the load placed on the virtual infrastructure as a whole compared to legacy backup applications that put agents in the guest operating system.

Results

After completing the upgrades, “BOB’S BAIT AND TACKLE” is now able to back up not just one of the Exchange DAG members, but all of the servers that help provide Exchange services, including front end servers and MTA servers, in less than 8 hours. Incremental backups have also been reduced to approximately 2 hours.

Instead of a single backup server and tape library, Veeam and Data Domain are able to provide multiple paths for backup data to flow through. On the Veeam side we have implemented three Veeam proxy servers, and on the Data Domain side we have two CIFS shares, each attached to its own gigabit Ethernet port. This allows many backup jobs to run in parallel without affecting the performance of any one job the way a single path would.

All of these technologies, combined with an optimized configuration by the IT team, have led to a shorter backup window and a much better RPO and RTO. The solution also has the ability to add as much as 4TB more to the Data Domain boxes to allow for an extended backup retention time in the future.

Real World Data

While most marketing documents show you averages and numbers that are sometimes questionable, this post will use real numbers taken directly from “BOB’S BAIT AND TACKLE”’s environment.

First let’s look at the raw requirements: the amount of VMware data that Veeam is protecting is approximately 3.7TB, and a full backup, after Veeam compresses and dedupes it, is 1.7TB. The graphs below show the first 30 days of activity for the Data Domain. In that time, all Veeam backup VBK and VIB files would account for 4.65TB of used space if we were writing to a device with no compression or deduplication. However, the actual amount of space used on the Data Domain is 2.86TB, a savings of 1.79TB in just 30 days, which is roughly a 1.6x reduction, or close to 40% less space!
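
Spelling that out:

4.65TB sent by Veeam / 2.86TB actually written = ~1.6x reduction
4.65TB - 2.86TB = 1.79TB saved, or just under 40% of what was sent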

A picture is worth 1000 words

Figure 1: Space Usage

This graph shows three things: the compression factor (the black line), the amount of data that Veeam is sending to the DD appliance (the blue area), and the amount of data actually written to disk after the DD appliance deduplicates and compresses it (the red area).

As you can see, for the first couple of days there is almost no benefit from deduplication or compression on the Data Domain side. We start to see the red and blue areas separate around day 7, when a second full backup is taken. Normally we would expect to see the space used grow slowly and not in a step pattern; however, because they started backing up virtual machines as they were migrated to the new SAN, we see large jumps in the early part of the graph. After the first week of migrations we started those backups, and after week two’s migrations we started those, hence the step pattern.

Figure 2: Daily Amount Written (7days)

This graph shows the total amount of data ingested by the Data Domain (the total height of each bar) as well as the amount written to disk (the height of the red portion). The blue portion of each bar is data that was ingested but found to be a duplicate of data already on the appliance.

The interesting thing to note here is the difference between the amount of data Veeam sends to the DD appliance on a full backup day (Saturdays by default) and the amount of data actually written to disk. In this example Veeam sent almost 675GB and it only consumed about 27GB on the Data Domain. This is where you see the savings of a dedupe appliance.

Figure 3: Daily Amount Written (30days)

This graph shows the same data as Figure 2, but in a 30 day view instead of a 7 day view. You can see that at the beginning of the backups we were ingesting large amounts of data (total bar height) and also writing a large amount to disk (the red area of the bar). This is because the Data Domain was seeing a lot of data it had not seen before; over time the red bars get smaller and smaller while the blue bars stay the same or get larger. As a side note, the last two samples on the right show a much larger red area than normal; this is because new machines with unique data were added to backup jobs. That is a one-time occurrence, and by the next backup we should see much lower rates again.

Conclusion

The combination of Data Domain storage and Veeam Backup software is a near perfect fit for protecting VMware virtual machines. “BOB’S BAIT AND TACKLE” is now able to get good backups without huge backup windows; they are also able to replicate those backups offsite with minimal bandwidth usage, and they will eliminate the need for tape once the remaining servers have been virtualized.

Overall “BOB’S BAIT AND TACKLE” should expect to see about an 8 week retention period with the Data Domain configuration today, but they could expect to see as much as a 60 week retention period if the existing appliances are upgraded to their maximum capacity of 12 drives.

Without the Data Domain, Veeam would require us to provide 7TB of storage to store 8 weeks of backups for this customer. If we were to upgrade the DD620 to the 12 drive configuration we could store 60 weeks… and if we were trying to do that with normal storage it would require 48TB of disk space, instead of 8TB of Data Domain storage.
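
The quick math behind those numbers:

~7TB / 8 weeks = ~0.8-0.9TB of Veeam backup files landing on disk per week
60 weeks x ~0.8TB = ~48TB of plain disk for the longer retention
The same 60 weeks fits in the upgraded DD620’s ~7.8TB (call it 8TB) of usable space, roughly a 6x difference in the capacity you have to buy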

Price Comparison

To compare prices, let’s look at a DD620 with the maximum drive configuration versus a VNX5300 file-only SAN with enough drive space to hold the 48TB of Veeam data we were talking about earlier. Please note that all pricing here is list price.

Data Domain with max drive configuration: $40,433 per site

VNX 5300 File only SAN with ~48TB usable: $74,227 per site

So the Data Domain is about $34,000 cheaper per site just for the hardware investment. Also, the Data Domain requires 2U of rack space whereas the VNX requires 11U, so if you have to co-locate your DR box at a datacenter there will be more cost there. And if you are powering it yourself, spinning 45 drives compared to 12 will definitely cost you more.