How to Supercharge Your Copy Process

We had six terabytes of data to copy from one NAS device to another, and not enough time to do it. Here’s how we turned on the afterburners to move that data in about thirty hours.

A customer’s server was down because of a failed NAS device, and unfortunately he didn’t have a hot backup. He needed to copy a lot of data quickly, and he asked us to help. The data set consisted of 175 files of roughly 35 GB each, 6 TB in total, which had to be copied across a fast 100 MB Ethernet network to a new NAS device.

The customer had tried copying the data with the built-in Windows copy function, but Windows could not reliably copy the 35 GB files across the local network. It usually failed with the error “Not enough server storage is available to process this command,” even though the problem had nothing to do with the actual storage device.

A reboot would sometimes provide a temporary fix, letting the first file copy, but subsequent files would fail again. There are registry tweaks for this error posted on blogs, but none of them proved useful. Each 35 GB file took about an hour to copy, so rebooting for every file was unreasonable.
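For the record, the tweak most often cited for this particular error is raising the IRPStackSize value on the machine serving the files. A minimal sketch of applying it, assuming Python with the standard winreg module and administrator rights (it did not help in our case):

```python
# A sketch of the tweak most often cited for the "Not enough server
# storage" error: raising IRPStackSize on the machine serving the files.
# Run as Administrator; the Server service (or the machine) must be
# restarted before the change takes effect.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
    try:
        current, _ = winreg.QueryValueEx(key, "IRPStackSize")
    except FileNotFoundError:
        current = 15  # the documented default when the value is absent
    # The documented range is roughly 11-50; increase conservatively.
    winreg.SetValueEx(key, "IRPStackSize", 0, winreg.REG_DWORD,
                      min(current + 3, 50))
```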

Then we tried a variety of utilities, including Robocopy, xcopy, and eseutil (an Exchange utility), but all of them failed with similar errors. Robocopy was at least able to resume from the point of failure, but with all the interruptions each file took more than two hours to finish.
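For completeness, here is roughly how a Robocopy attempt looked. Restartable mode (/Z) is what let it pick a file back up from the point of failure; the paths below are placeholders, and we drive it from Python only to keep all the examples here in one language:

```python
# A sketch of a Robocopy run in restartable mode. Paths are placeholders.
# /Z is what let Robocopy resume a 35 GB file from the point of failure
# instead of starting over after each error.
import subprocess

SRC = r"\\old-nas\share"  # hypothetical source share
DST = r"\\new-nas\share"  # hypothetical destination share

subprocess.run(
    ["robocopy", SRC, DST, "*.*",
     "/Z",        # restartable mode: resume interrupted files mid-stream
     "/R:10",     # retry each failing file up to 10 times
     "/W:30"],    # wait 30 seconds between retries
    check=False,  # robocopy exit codes 0-7 all indicate (partial) success
)
```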

Research turned up a couple of possibilities. The first, FTP transfers, had never occurred to us. But it turns out that on a local network, running multiple threads, FTP is REALLY fast and efficient.

The second possibility was a program called RichCopy, originally written by Microsoft for internal use and later released to the public as freeware. It is very good at copying huge files. We decided to use both.

For FTP we installed the FileZilla server (freeware) and the SmartFTP client. Using FTP alone, we were able to move each 35 GB data file in 15-20 minutes. The SmartFTP client can run multiple simultaneous transfers (worker threads).
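SmartFTP handles the worker threads through its GUI, but the idea is simple enough to sketch. The following is a minimal illustration using Python's standard ftplib, with a hypothetical host, credentials, and source directory; each worker opens its own connection, because a single ftplib session cannot be shared between threads:

```python
# A minimal sketch of the worker-thread idea behind the fast FTP transfers.
# Host, credentials, and the source directory are hypothetical; SmartFTP
# does the equivalent through its GUI.
import ftplib
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

HOST, USER, PASSWORD = "192.168.1.50", "copyjob", "secret"  # placeholders

def upload(path: Path) -> str:
    # One FTP connection per worker thread.
    with ftplib.FTP(HOST) as ftp:
        ftp.login(USER, PASSWORD)
        with path.open("rb") as fh:
            # A large block size cuts per-call overhead on huge files.
            ftp.storbinary(f"STOR {path.name}", fh, blocksize=1024 * 1024)
    return path.name

files = [p for p in Path(r"D:\dataset").iterdir() if p.is_file()]
with ThreadPoolExecutor(max_workers=3) as pool:
    for name in pool.map(upload, files):
        print("done:", name)
```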

But with 6+ TB of data in 175 files, the copy process was still too slow. So we also fired up RichCopy and set it to run with three threads.
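RichCopy is closed source, so we can't show its internals, but what it does is conceptually easy to sketch: a small pool of threads, each copying a different file, so the network link stays busy while any one thread waits on disk. A rough Python equivalent, with placeholder paths:

```python
# A sketch of the multithreaded-copy idea RichCopy implements: a pool of
# threads, each copying a different file in parallel. Paths are placeholders.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC = Path(r"\\old-nas\share")  # hypothetical source share
DST = Path(r"\\new-nas\share")  # hypothetical destination share

def copy_one(src: Path) -> str:
    shutil.copyfile(src, DST / src.name)
    return src.name

files = [p for p in SRC.iterdir() if p.is_file()]
with ThreadPoolExecutor(max_workers=3) as pool:  # three threads, as we ran
    for name in pool.map(copy_one, files):
        print("copied:", name)
```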

Using this combination of tools we were able to copy six terabytes in about 30 hours.
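For perspective: six terabytes in 30 hours works out to a sustained average of roughly 55 MB/s, versus the 29-39 MB/s a single FTP transfer was managing on its own (35 GB in 15-20 minutes). That gap is the multithreading paying off.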


About The Author

Rob Cosgrove / http://remote-backup.com

Rob Cosgrove is President of Remote Backup Systems, developers of the fully brandable RBackup Online Backup software platform, powering more than 9,500 Service Providers, MSPs and VARs worldwide since 1987. He is the founder of the Online Backup industry and the author of several books, the most recent being "The Online Backup Guide for Service Providers", available at Amazon.com and in bookstores. http://remote-backup.com