Jan 17, 2013

In many Enterprise Vault environments there comes a time when data has to be moved.  This can happen for a number of reasons, such as:


* Server becomes underpowered as Enterprise Vault usage increases

* Storage locations which seemed adequately sized during the design and implementation begin to reach capacity

* New, better, faster underlying storage mechanisms are purchased and need to be added in to the environment

* Virtualisation of the environment becomes necessary


Whatever the reason, Enterprise Vault administrators are then faced with a large amount of data to copy.  Not only is it a large amount of data, but it is also the ‘dreaded’ huge quantity of files.

In this article I will explain a little testing that I did on different ways to copy data, and pick a ‘winner’.


Environment Details

I have the following:


114,834 files in “C:\Enterprise Vault Stores”

2.48 GB of data


This is my Enterprise Vault server – it is running Windows 2008 R2 x64, and it was idle for each of the tests.

I am copying that folder path to another virtual machine, which lives on a different physical SSD.


Windows File Copy

This is simply dragging and dropping the folder from one Windows Explorer window to another… and timing the copy.  The results of that are:


Run 1 = 33 minutes 46 seconds

Run 2 = 34 minutes 0 seconds

Run 3 = 33 minutes 30 seconds


Average = 33 minutes 45 seconds


Not particularly fast – and that’s what we expected because of the number of files.


As a comparison, I also have a single 2.5 GB file, and that takes 1.5 minutes to copy (roughly 28 MB/s).


Command Line XCopy

I expected this to be a little faster, because there is no fancy GUI that needs to be updated.
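For reference, a run like mine can be reproduced with an invocation along these lines (the destination path here is illustrative, not the one from my lab):

```shell
rem /E = recurse into subdirectories, including empty ones
rem /H = include hidden and system files
rem /K = keep file attributes
rem /Y = suppress overwrite prompts so the copy runs unattended
xcopy "C:\Enterprise Vault Stores" "D:\EVCopy\Enterprise Vault Stores" /E /H /K /Y
```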


Run 1 = 35 minutes

Run 2 = 28 minutes

Run 3 = 34 minutes


Average = 32 minutes and 20 seconds


Robocopy

Robocopy is almost an ancient tradition when it comes to file copying on Windows.  It has many, many options that can be used to copy all sorts of additional information like dates/times, security settings, and so on.  In my example I used it as simply as possible; the results were:
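As a sketch, the simplest robocopy run looks something like this (again, the destination path is illustrative):

```shell
rem /E = copy subdirectories, including empty ones; timestamps and
rem attributes are carried across by robocopy's defaults
robocopy "C:\Enterprise Vault Stores" "D:\EVCopy\Enterprise Vault Stores" /E
```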


Run 1 = 33 minutes 46 seconds

Run 2 = 34 minutes

Run 3 = 33 minutes 30 seconds


Average = 33 minutes and 45 seconds


It does have the advantage, though, that if the copy is interrupted it can be resumed from where it left off fairly quickly.
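If you want that resume behaviour (plus mirroring of later changes), flags along these lines are commonly used – a sketch, not the exact command from my tests:

```shell
rem /MIR = mirror the tree (copies changes, deletes extras at the destination)
rem /Z   = restartable mode, so an interrupted file copy can be resumed
rem /R:3 /W:5 = retry a failed file 3 times, waiting 5 seconds between tries
robocopy "C:\Enterprise Vault Stores" "D:\EVCopy\Enterprise Vault Stores" /MIR /Z /R:3 /W:5
```

Note that /MIR deletes destination files that no longer exist at the source, so only use it when a true mirror is what you want.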


FSAMigrator from QUADROtech

This is something that I thought I would try to see how it performed.  I heard that it was quick, and had other features that would help with very large copies (like the ability to schedule a ‘window’ where the copy takes place).  The results of this test were:


Run 1 = 11 minutes 40 seconds

Run 2 = 13 minutes

Run 3 = 12 minutes 35 seconds


Average = 12 minutes and 25 seconds



The clear winner in terms of time taken to push the data from A to B is FSAMigrator from QUADROtech.  This tool can be used to migrate online data (as I have done), though its primary purpose is really related to placeholders (offline data) AND online FSA data.  Not only is it the winner from the point of view of the time it took, but it also has some other fantastic features:


* Schedule-able

* Massively multi-threaded, with the thread count configurable in the GUI

* Resumes from where it left off when it starts its next schedule

* Can be used to mirror the data (so if the data has changed between one run and the next, it can ‘synchronise’ those changes)

* Fantastic GUI for viewing the progress of the copy


I’m not saying that some of these things aren’t possible with the other methods tested above, but having them all in one place is awesome.





