Fast File Transfer Archives - Managed File Transfer Solutions | Secure File Transfer Software | UK

Accelerated File Transfer – Extreme Speeds Ahead

If you’re looking to move a large amount of data, a Managed File Transfer (MFT) solution can help with the automation, but for the most part it will still transmit the file at the same rate as a traditional FTP client.

There are some delivery protocols now being incorporated into MFT solutions which significantly increase the speed of transmission. Unfortunately, there is no open standard protocol for high-speed transmission, so, at the moment, your options will tie you to one vendor or another. Typically, a dedicated client is also required, and some of these are not easy to integrate into an automation process.

Software vendors have largely looked to solve the fast file transfer problem using two different approaches: protocol development has been based either on multi-threaded TCP streams or on an expansion of the open source UDP/UDT projects.
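The multi-threaded TCP approach can be sketched roughly as follows: the file is split into byte ranges and each range is sent over its own connection in parallel. This is a minimal illustration, not any particular vendor's implementation; the `send_chunk` function is a hypothetical stand-in for a real per-connection transfer.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per parallel stream (illustrative)

def plan_chunks(file_size: int, chunk_size: int = CHUNK_SIZE):
    """Split a file into (offset, length) ranges, one per TCP stream."""
    return [(off, min(chunk_size, file_size - off))
            for off in range(0, file_size, chunk_size)]

def send_chunk(chunk):
    # A real client would open its own TCP connection here and send
    # bytes [offset, offset + length) of the file.
    offset, length = chunk
    return length

def parallel_send(file_size: int, workers: int = 8) -> int:
    """Transfer all chunks concurrently; returns total bytes 'sent'."""
    chunks = plan_chunks(file_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        sent = sum(pool.map(send_chunk, chunks))
    return sent
```

Because each stream ramps up and recovers from loss independently, several modest streams can together fill a link that a single TCP connection cannot.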

Transmission Control Protocol (TCP)

TCP is the underlying network communication protocol used in all standard MFT protocols. Fundamentally, a file or message is split up into small packets, which are numbered, checksummed and then sent to a remote server. At the other end of the communication channel, the packet is stripped of its header information and checksummed again. If the checksums match, an acknowledgement packet is returned to the sender to confirm successful receipt. If the checksums do not match, the packet is considered to have been corrupted in transit and a non-acknowledgement is sent with a request to resend the packet. At the sender’s end, if no acknowledgement packet is received within a specified time, the packet is assumed to have been lost and is sent again.
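The number-checksum-acknowledge cycle described above can be sketched as a toy simulation. This is purely conceptual (real TCP handles windows, ordering and timeouts in the kernel); packets whose checksum fails verification go back into the retry queue, standing in for the resend request.

```python
import zlib

def packetise(data: bytes, size: int = 1024):
    """Number and checksum each packet, as TCP does conceptually."""
    return [
        {"seq": i, "payload": data[off:off + size],
         "checksum": zlib.crc32(data[off:off + size])}
        for i, off in enumerate(range(0, len(data), size))
    ]

def receive(packet):
    """Return (seq, acked) - acked only if the checksum still matches."""
    ok = zlib.crc32(packet["payload"]) == packet["checksum"]
    return packet["seq"], ok

def transfer(data: bytes) -> bytes:
    """Send every packet until each one has been acknowledged."""
    received = {}
    pending = packetise(data)
    while pending:
        retries = []
        for pkt in pending:
            seq, ok = receive(pkt)
            if ok:
                received[seq] = pkt["payload"]   # acknowledged
            else:
                retries.append(pkt)              # NAK: queue for resend
        pending = retries
    return b"".join(received[s] for s in sorted(received))
```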

With poor quality networks, it is possible for the original packet to arrive intact but for the acknowledgement packet to be corrupted or lost in transmission. This causes the whole packet to be resent unnecessarily, adding to transmission times. In addition, because of the way FTP sends files, not all available bandwidth is used early in the transfer, and the rate of transmission is dictated by how much data gets through before failures start to occur.


User Datagram Protocol (UDP)

UDP based fast transfer protocols are becoming more common, with at least three major vendors incorporating them into their MFT solutions. These work in a similar way to traditional TCP based transfers, but do not wait for an acknowledgement packet to be received; they assume the packet arrived intact. A separate TCP control channel is kept open, over which retry requests can be sent if there is an issue with a packet.

The UDP based transfers are much more efficient over long distances and poor quality networks. For example, some of the major internet video services, such as Netflix or LoveFilm, embed them into their software to deliver video to customers’ homes effectively.

A little bit of testing…

During software evaluation testing we moved GBs of data from one Amazon data centre in the US to one in Europe, and saw a 40% increase in the speed of transmission when using UDP based transfers. When moving data out of the Amazon environment, we saw even greater speed improvement. Multi-threaded TCP based protocols improved speed based on the “shape” of the data. A single very large file moved quicker than lots of small files as the overhead to negotiate the transfer for each file had a significant impact on the transmission time.

Even with all these considerations, we found both methods improved transfer speeds noticeably. A “raw” FTP transfer took 10 minutes to move a file over a domestic broadband connection, whereas the accelerated protocols took between 7 and 8 minutes.
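As a quick sanity check on those figures, the time savings work out like this (a trivial calculation, using the numbers quoted above):

```python
def speedup_pct(baseline_s: float, accelerated_s: float) -> float:
    """Percentage reduction in transfer time versus the baseline."""
    return (baseline_s - accelerated_s) / baseline_s * 100

# 10 minutes raw FTP vs 7-8 minutes accelerated:
best = speedup_pct(600, 420)   # 7 minutes -> 30% faster
worst = speedup_pct(600, 480)  # 8 minutes -> 20% faster
```

So over a short domestic link the gain was 20-30%, versus the 40%+ seen on the long-haul data-centre transfers, which is consistent with acceleration protocols paying off most when latency and loss are high.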


In summary…

At the moment, it is too early to say if one protocol or solution will win out over the others, but the trend seems to be that UDP based products are slightly faster and therefore being more widely adopted by Managed File Transfer software vendors. A lot may depend on whether any vendor opens up their protocol to be incorporated into other applications.

Resources Available For You

The Expert Guide to Managed File Transfer

Includes definitions, requirements assessment, product comparison, building your business case and much more.

Managed File Transfer Needs Analysis

200 essential questions to consider before implementing your chosen Managed File Transfer solution.

Managed File Transfer Comparison Guide

A full feature comparison of Managed File Transfer solutions from the eight leading vendors in the industry.

Webinar – Move Big Data up to 100 Times Faster

Event Type: Live Webinar
Event Date: Thursday 28th April 2016
Event Time: 10AM – 10.30AM

IBM Aspera customers have experienced:
  • 20x reduction in process times – one customer’s transfer was reduced from 10 hours to 26 minutes
  • Network utilisation increased from 2% to 99%
  • 4x reduction in process turnaround – a media customer cut their process turnaround from 6 days to 1.5 days

So join Pro2col and IBM for a short webinar to see how Aspera moves big data up to 100 times faster.

This webinar will cover how IBM Aspera:
  • Globally moves data over standard wide area networks
  • Achieves multi-Gbps speeds over high-performance global networks
  • Securely moves and accesses big data – from ANY LOCATION, to ANYWHERE and with ANYONE
  • Provides enterprise-grade user access controls and encryption
  • Utilises ANY infrastructure combination – on-premises, AWS, Google, Azure, SoftLayer

Moving files, fast. Really fast!

Most of the companies that contact us are looking for guidance on automating their file transfers. Automation is primarily for B2B and predominantly involves SFTP. A simple but typical example: an insurance company receives files from a bank, and the files need to be moved to another internal system for processing. There are usually a wide range of other, more complex processes alongside this, but you get the idea.

In recent times, however, many of our conversations have gone along the lines of, “How can we move REALLY large files?” With the continuing explosion in data creation, companies are having problems moving large data sets, many gigabytes in size. All variations of FTP can deliver large volumes of data, but when the destination is in a far-flung location and you’ve got a poor internet connection, suffering from latency and packet loss, FTP becomes pretty much ineffectual. Latency on the line, measured in round trip time (RTT), is how long it takes for a packet of data to get from point A to B and back again. We won’t go into the reasons in this blog, but suffice to say the longer the RTT, the less efficient FTP becomes, as can be seen in the graph here.
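The underlying reason is simple to see: a single TCP stream can never move more than one window of data per round trip, so throughput is capped at window size divided by RTT. A small calculation (illustrative window size, not any particular system's default):

```python
def tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Ceiling on single-stream TCP throughput: window size / RTT."""
    bits = window_bytes * 8
    return bits / (rtt_ms / 1000) / 1_000_000

# A 64 KiB window over a transatlantic link (~100 ms RTT) caps out
# at roughly 5 Mbps, regardless of how fat the pipe actually is.
cap = tcp_throughput_mbps(64 * 1024, 100)
```

Doubling the RTT halves the ceiling, which is why the same transfer that flies across a LAN crawls between continents.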

However, all is not lost: there is a solution to this data transfer problem. A select few vendors have built proprietary protocols on top of the open standard UDP to move data faster. A lot faster! Their protocols generally work the same way, in that they maximise utilisation of the available bandwidth by flooding the connection with data. Of course, controls are built in to ensure other network traffic doesn’t suffer. This approach can increase speeds by up to 1,000 times, depending on the network conditions and bandwidth available.
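One common way to implement that “flood the link, but don’t starve everyone else” control is a token bucket: the sender may only put a datagram on the wire if it has accumulated enough credit. A minimal sketch, not any vendor's actual rate-control algorithm:

```python
import time

class TokenBucket:
    """Pace outgoing datagrams so a UDP sender doesn't starve other traffic."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: int):
        self.rate = rate_bytes_per_s      # sustained sending rate
        self.capacity = burst_bytes       # maximum short burst
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def try_send(self, size: int) -> bool:
        # Credit accrues continuously at the configured rate, up to the cap.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False   # caller should back off and retry later
```

In real products the target rate is typically adjusted dynamically based on observed loss and queuing delay, rather than fixed up front.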

These UDP based solutions are now reaching a level of maturity enabling the software to be used in many scenarios, for example:

  • Disaster recovery and business continuity
  • Content distribution and collection, e.g., software or source code updates, or CDN scenarios
  • Continuous sync – near real time syncing for ‘active-active’ style HA
  • Replication – basic master-slave set-ups, but also more complex bi-directional sync and mesh scenarios
  • Person to person distribution of digital assets
  • Collaboration and exchange for geographically-distributed teams
  • File based review, approval and quality assurance workflows

Not only are files getting bigger, but the environments within which they are handled are evolving, and the technologies with which they need to interface are changing too. Our expert team analyse complex requirements daily, providing companies of all shapes and sizes with solutions to their tricky file transfer conundrums. If your business needs to share large or sensitive data, in either an automated or manual process, we can help. Contact one of our file transfer experts on +44 1202 433415 or get in touch via the website.

Are these the three main types of b2b file transfer solutions?

File transfer requirements are diversifying at a rate of knots, with more products available than I care to count, but for me there are three distinct types of file transfer solution that the majority of larger corporate and blue chip customers are interested in. These are:

Enterprise File Transfer – makes use of email to deliver a message to the end user with instructions on how to download the file(s), with the added functionality of tracking and reporting. This method is great for the ad-hoc user, requiring little to no training.

Managed File Transfer – relates to the secure delivery of files, in many cases making use of secure FTP based protocols, while also providing additional functionality such as reporting and monitoring. These solutions are generally embedded processes that are not seen by the users and underpin internal/external business processes.

Fast File Transfer – with businesses needing to shift large volumes of data over increasing distances across the WAN or Internet, traditional delivery protocols such as FTP have been superseded by UDP based delivery solutions, which can send files significantly faster. With the cost of Internet connectivity as it is, WAN acceleration technologies are becoming more frequently used to maximise throughput over those connections.

I’d be interested to hear from anyone who has suggestions for areas we may have missed, particularly if you’re a vendor of one of these solutions and are looking for representation in the UK.