
Accelerated File Transfer – Extreme Speeds Ahead


If you’re looking to move a large amount of data, a Managed File Transfer (MFT) solution can help with the automation, but for the most part it will still transmit the file at the same rate as a traditional FTP client.

There are some delivery protocols now being incorporated into MFT solutions which will significantly increase the speed of transmission. Unfortunately, there are no open standard protocols for high-speed transmission, so at the moment your options are going to tie you to one vendor or another. Typically, a dedicated client is also required, and some of these are not easy to integrate into an automation process.

Software vendors have largely taken two differing approaches to the fast file transfer problem: either multi-threading TCP streams, or building on the open source UDP/UDT projects.


Transmission Control Protocol (TCP)

TCP is the underlying network communication protocol used in all standard MFT protocols. Fundamentally, a file or message is split up into small packets, which are numbered, checksummed and then sent to a remote server. At the other end of the communication channel the packet is stripped of its header information and then checksummed again. If the checksums match, an acknowledgement packet is returned to the sender to confirm successful receipt of the packet. If the checksums do not match, the packet is considered to have been corrupted in transit and a non-acknowledgement is sent with a request to send the packet again. From the sender’s end, if no acknowledgement packet is received within a specified time, the packet is assumed to have been lost and is sent again.
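The acknowledge-and-retransmit cycle described above can be sketched in a few lines of Python. This is a simplified simulation, not real TCP: real implementations use sliding windows and a 16-bit checksum rather than MD5, and all of the function names here are illustrative.

```python
import hashlib

def make_packet(seq: int, payload: bytes) -> dict:
    """Number and checksum a chunk before sending, as described above."""
    return {"seq": seq, "payload": payload,
            "checksum": hashlib.md5(payload).hexdigest()}

def receive_packet(packet: dict) -> str:
    """Re-checksum on receipt: ACK on a match, NAK (resend request) otherwise."""
    if hashlib.md5(packet["payload"]).hexdigest() == packet["checksum"]:
        return f"ACK {packet['seq']}"
    return f"NAK {packet['seq']}"  # corrupted in transit; ask for it again

def send_file(data: bytes, chunk_size: int = 4) -> list:
    """Split a file into numbered packets and collect the receiver's replies."""
    replies = []
    for seq, start in enumerate(range(0, len(data), chunk_size)):
        pkt = make_packet(seq, data[start:start + chunk_size])
        replies.append(receive_packet(pkt))
    return replies
```

Note that the sender stalls on every reply: each packet costs at least one round trip, which is exactly where the speed problem comes from.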

With poor quality networks, it is possible for the original packet to be received intact but the acknowledgement packet to be either corrupted or lost in transmission. This causes the whole packet to be resent unnecessarily, with an impact on transmission times. In addition, because of the way TCP ramps its sending rate up gradually, FTP does not use all of the available bandwidth early in the transfer, and the rate of transmission is ultimately dictated by how much data gets through before failures start to occur.


User Datagram Protocol (UDP)

UDP based fast transfer protocols are becoming more common, with at least three major vendors incorporating them into their MFT solutions. These work in a similar way to traditional TCP based transfers, but do not wait for an acknowledgement packet to be received; they assume each packet arrived intact. A TCP control channel is kept open, over which packet retry requests can be sent if there is an issue with a packet.
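That split between a fire-and-forget UDP data channel and a TCP control channel for retries can be sketched as follows. This is a simulation under simplifying assumptions (in-memory dictionaries stand in for the two channels, and the names are made up for illustration, not taken from any vendor’s API):

```python
def udp_send(datagrams: dict, drop: frozenset = frozenset()) -> dict:
    """Fire-and-forget: send every datagram without waiting for ACKs.
    'drop' simulates packets lost on the network."""
    return {seq: data for seq, data in datagrams.items() if seq not in drop}

def control_channel_retries(sent_seqs, received) -> list:
    """Over the TCP control channel, the receiver asks only for the gaps."""
    return sorted(set(sent_seqs) - set(received))

datagrams = {0: b"a", 1: b"b", 2: b"c", 3: b"d"}
received = udp_send(datagrams, drop=frozenset({2}))
retries = control_channel_retries(datagrams, received)
# Resend only the missing datagram, rather than stalling on per-packet ACKs.
received.update(udp_send({seq: datagrams[seq] for seq in retries}))
```

The sender never pauses for acknowledgements, so a lossy, high-latency link only costs the occasional retry rather than a round trip per packet.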

The UDP based transfers are much more efficient over long distances and poor quality networks. For example, some of the major internet video services, such as Netflix or LoveFilm, embed them into their software to deliver video to customers’ homes effectively.

A little bit of testing…

During software evaluation testing we moved GBs of data from one Amazon data centre in the US to one in Europe, and saw a 40% increase in the speed of transmission when using UDP based transfers. When moving data out of the Amazon environment, we saw even greater speed improvement. Multi-threaded TCP based protocols improved speed based on the “shape” of the data. A single very large file moved quicker than lots of small files as the overhead to negotiate the transfer for each file had a significant impact on the transmission time.

Even with all these considerations, we found both methods improved transfer speeds by a significant amount. A “raw” FTP transfer took 10 minutes to move a file over a domestic broadband connection, whereas the accelerated protocols took between 7 and 8 minutes.
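For those who like the numbers spelled out, the improvement from the figures above works out like this (a quick back-of-the-envelope calculation; the function is just for illustration):

```python
def speedup(baseline_minutes: float, accelerated_minutes: float) -> tuple:
    """Percentage of time saved, and effective throughput gain, for the same file."""
    time_saved = 1 - accelerated_minutes / baseline_minutes
    throughput_gain = baseline_minutes / accelerated_minutes - 1
    return round(time_saved * 100), round(throughput_gain * 100)

# 10-minute raw FTP transfer vs. the 7-8 minute accelerated figures:
print(speedup(10, 8))  # slower end of the accelerated range
print(speedup(10, 7))  # faster end of the accelerated range
```

So even on a short-haul domestic link, that 2-3 minutes saved is a 25-43% increase in effective throughput; the gains grow as latency and loss get worse.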


In summary…

At the moment, it is too early to say if one protocol or solution will win out over the others, but the trend seems to be that UDP based products are slightly faster and therefore being more widely adopted by Managed File Transfer software vendors. A lot may depend on whether any vendor opens up their protocol to be incorporated into other applications.

Moving files, fast. Really fast!



Most of the companies that contact us do so looking for guidance on automating their file transfer. Automation is primarily for B2B and predominantly involves using SFTP. A simple but typical example: an insurance company receives files from a bank, and the files need to be moved to another internal system for processing. Of course, there are usually a wide range of other complex processes that accompany this, but you get the idea.

In recent times, however, many of our conversations have gone along the lines of, “How can we move REALLY large files?” With the continuing explosion in data creation, companies are having problems moving large data sets, many gigabytes in size. All variations of FTP can deliver large volumes of data, but when you add into the mix that the destination is in a far flung location and you’ve got a dodgy internet connection, suffering from latency and packet loss, then FTP becomes pretty much ineffectual. Latency on the line, measured in round trip time (RTT), is how long it takes for a packet of data to get from point A to point B and back again. We won’t go into the reasons in this blog, but suffice to say the longer the RTT, the less efficient FTP becomes, as can be seen in the graph here.
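One way to see why RTT hurts so much: a single TCP stream can carry at most one receive window of data per round trip, so its throughput is capped at window size divided by RTT, no matter how fast the link itself is. A quick Python illustration (the 64 KB window and the RTT figures are hypothetical, chosen to show the shape of the curve):

```python
def tcp_throughput_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on a single TCP stream: one window per round trip."""
    bits_per_round_trip = window_bytes * 8
    round_trips_per_second = 1000 / rtt_ms
    return bits_per_round_trip * round_trips_per_second / 1_000_000

# A 64 KB window over increasingly long round trips:
for rtt in (10, 100, 300):
    print(rtt, "ms ->", round(tcp_throughput_ceiling_mbps(64 * 1024, rtt), 1), "Mbps")
```

Tripling the distance (in RTT terms) cuts the ceiling to a third; on an intercontinental link a 100 Mbps pipe can be reduced to a few Mbps per stream.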

However, all is not lost: there is a solution to this data transfer problem. A select few vendors have built proprietary protocols on top of the open standard UDP to move data faster. A lot faster! Their protocols generally work in the same way, in that they maximise the utilisation of the available bandwidth by flooding the connection with data. Of course, controls are built in to ensure other network traffic doesn’t suffer. This approach can increase speeds by up to 1,000 times, depending upon the network conditions and bandwidth available.
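The “flood the connection, but with controls” idea can be sketched as a token bucket that caps the send rate. This is a deliberately simplified stand-in for the adaptive rate control these vendors actually implement (real products adjust the rate dynamically from measured loss and delay; the class and its parameters here are our own illustration):

```python
class TokenBucket:
    """Cap the UDP send rate so other traffic on the link isn't starved."""

    def __init__(self, rate_bytes_per_sec: float, now: float = 0.0):
        self.rate = rate_bytes_per_sec
        self.tokens = rate_bytes_per_sec  # allow up to one second of burst
        self.last = now

    def try_send(self, size: int, now: float) -> bool:
        # Refill tokens for the time elapsed, up to the burst ceiling.
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False  # over budget: back off, leaving headroom for other traffic
```

The sender keeps pushing datagrams as fast as the bucket allows, rather than waiting on acknowledgements, which is how these protocols fill long, lossy links that defeat TCP.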

These UDP based solutions are now reaching a level of maturity enabling the software to be used in many scenarios, for example:

  • Disaster recovery and business continuity
  • Content distribution and collection, e.g., software or source code updates, or CDN scenarios
  • Continuous sync – near real time syncing for ‘active-active’ style HA
  • Basic master-slave replication, as well as more complex bi-directional sync and mesh scenarios
  • Person to person distribution of digital assets
  • Collaboration and exchange for geographically-distributed teams
  • File based review, approval and quality assurance workflows

Not only are files getting bigger, but the environments within which they are handled are evolving, and the technologies they need to interface with are changing too. Our expert team analyse complex requirements daily, providing companies of all shapes and sizes with solutions to their tricky file transfer conundrums. If your business needs to share large or sensitive data, in either an automated or a manual process, we can help. Contact one of our file transfer experts on +44 1202 433415 or get in touch via the web site.