
Six Managed File Transfer Experts Give Their Top File Transfer Automation Tips

One of the biggest driving factors behind businesses implementing Managed File Transfer (MFT) is to ‘improve productivity’, and one of the biggest ways in which MFT improves productivity is through file transfer automation.

So we decided to ask six file transfer experts from Cleo, Coviant Software, Globalscape, Ipswitch, JSCAPE and South River Technologies for their ‘top three file transfer automation tips’. We expected our experts to have diverse views on what matters most in file transfer automation, but even so we didn’t quite anticipate the variety of answers we received.

Joe Dupree, VP Marketing, Cleo

“Manual workflow functions hold businesses hostage. MFT automation can make your organization faster and more nimble by:

Delivering at a greater scale: There is a growing need for file transfer solutions that address all types of data management and integration use cases. Automation eases deployment and operation. Even better, solutions that are agnostic to all data formats and transfer protocols provide more versatility. But in today’s business that’s not enough. Automated MFT needs to massively scale as more digitized business workflows require rapid movement of more data to parallel the movement of their business processes, team members, goods, and services.

Achieving high availability: Customers expect full uptime, but how do you get there? Can your solution automate failover, and can you implement it without a small army of consultants?

Onboarding faster: How long does it take to onboard one new trading partner? What if you need to add lots of new connections? Ditch the manual configuring with a solution that comes pre-loaded with 900 preconfigured connections to the world’s largest trading hubs. Choose a proven solution that streamlines operations by enabling faster onboarding.

Don’t get caught doing things manually while revenue awaits and productivity drains. An advanced and automated MFT solution enables greater operational efficiency.”

Pam Reid, CEO, Coviant Software

“When a file transfer automation application is deployed, business users may have the false expectation that file transfers will always be successful. File transfer automation does greatly reduce the number of file transfer errors, but it cannot eliminate them altogether. File transfer errors occur for many reasons – network outages, locked files, a new password or encryption key that was not updated in the file transfer automation application and many other issues.

So, my three top tips for living up to high business user expectations are to look for a solution that:

Reduces overall file transfer errors. When transient errors occur, such as a network outage or a locked file, jobs should automatically wait and retry the connection before failing the job.

Minimizes the time to diagnose and resolve errors. Look for a solution that sends all of the diagnostic information directly to an IT support email account, which enables IT support to identify and attempt to resolve the issue right away.

Gives IT support time to resolve errors before business users are notified.  Assume a weekly payroll file is supposed to be ready for pick-up at 2pm on Thursday and must be delivered by 5pm. If the file transfer job runs at 2pm and does not find the file, the job should notify only IT support that the file is not ready – which gives IT support several hours to resolve the problem before business users are notified of a failure.”
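
Pam’s third tip translates naturally into a small scheduled job. The sketch below is purely illustrative – the file path, email addresses and SMTP relay are hypothetical placeholders, not part of any Coviant product:

```python
import smtplib
from datetime import datetime
from email.message import EmailMessage
from pathlib import Path

# Hypothetical values -- substitute your own paths, addresses and relay.
PAYROLL_FILE = Path("/mft/outbound/payroll.csv")
IT_SUPPORT = "it-support@example.com"
BUSINESS_USERS = "payroll-team@example.com"
DEADLINE_HOUR = 17  # business users are only alerted after the 5pm deadline

def notify(recipient: str, subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = "mft-alerts@example.com"
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.com") as smtp:
        smtp.send_message(msg)

def check_payroll_file() -> None:
    if PAYROLL_FILE.exists():
        return  # file is ready; the transfer job can proceed
    if datetime.now().hour < DEADLINE_HOUR:
        # Before the hard deadline: alert IT only, giving them time to fix it.
        notify(IT_SUPPORT, "Payroll file not ready",
               f"{PAYROLL_FILE} missing at {datetime.now():%H:%M}; deadline 17:00.")
    else:
        # Deadline passed: escalate to the business users as well.
        notify(BUSINESS_USERS, "Payroll transfer failed",
               f"{PAYROLL_FILE} was not delivered by the 17:00 deadline.")
```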

Matt Goulet, Senior VP, Sales & Marketing, Globalscape

“Automation is an effective way to improve file transfer efficiency. Manual, intermittent batch processes are an unreliable method of data delivery because of limitations such as lack of staff, turnover, and human error. Below are just a few ways you can streamline the file transfer process.

Program Workflows: Create programmatic workflows that can be used to trigger events and actions based on specific conditions. From being aware of multiple invalid logins to processing and scanning all incoming files, being able to program workflows saves time and minimizes errors.

Streamline Business Processes: Automated workflow tools that are used with managed file transfer solutions can help optimize business processes, even when sophisticated and complex workflows are needed. MFT automation tools help administrators perform complex tasks and remove the possibility of human error.

Process Files with Added Security Measures: Antivirus scanners and Data Loss Prevention tools can permit or prevent file transfers based on your policies, helping to keep your network free of infected files and to comply with regulatory standards by preventing personally identifiable information from being transferred.

If you’re ready to simplify your file transfer process, automate with an MFT solution.”
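
MFT products implement event-driven workflows through built-in rule engines rather than hand-written scripts, but as a rough illustration of the trigger-and-action pattern Matt describes, here is a minimal hot-folder watcher (the directory paths are hypothetical):

```python
import shutil
import time
from pathlib import Path

INBOX = Path("/mft/inbox")         # hypothetical hot folder watched for arrivals
PROCESSED = Path("/mft/processed")

def on_new_file(path: Path) -> None:
    # Placeholder action: in a real MFT event rule this is where you would
    # virus-scan, decrypt or route the file according to policy.
    print(f"processing {path.name}")
    shutil.move(str(path), PROCESSED / path.name)

def watch(poll_seconds: int = 5) -> None:
    """Poll the inbox and fire an action for every file not yet seen."""
    PROCESSED.mkdir(parents=True, exist_ok=True)
    seen: set[str] = set()
    while True:
        for path in INBOX.glob("*"):
            if path.is_file() and path.name not in seen:
                seen.add(path.name)
                on_new_file(path)
        time.sleep(poll_seconds)
```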

Kevin Conkin, VP of Product Marketing, Ipswitch

“Do you find yourself spending too much time on manual tasks related to file transfer such as tracking lost files, reworking custom scripts, or creating audit trails? Automated file transfer can make your job a lot easier to do, while improving the IT team’s positive impact on the business. Here are three ways automating file transfer helps IT teams:

IT productivity improves. Many file transfers are initiated on a recurring basis. IT teams can get bogged down confirming transfers to meet service level agreements. The automation that comes with a Managed File Transfer (MFT) solution promptly pushes data to the right person at the right time. This means that the IT team doesn’t have to think twice and can remain focused on other tasks.

IT makes users happy. Automated, managed file transfer will help you find a new way to work with your end users—to give them an easy-to-use solution that integrates seamlessly with what they’re already using.

IT becomes compliant. The automation that comes through Managed File Transfer can provide an audit trail and, at a moment’s notice, a real-time status of any transfer. As a result, MFT helps the organization become more compliant, especially when it is integrated with security controls such as encryption and data loss prevention technology.”

Van Glass, CEO, JSCAPE

“Simplify: Many legacy file transfer implementations are home-grown solutions that have evolved into an ever-growing number of disparate, undocumented scripts and processes. These scripts are often written in arcane programming languages and are difficult to maintain. This can be particularly troublesome when the original authors leave the organization and that knowledge is lost. When automating file transfers, it is important to centralize all file transfers under a single application that has a holistic view of all file transfer processes and does not require specialized programming knowledge.

Failures: It’s inevitable that some automated file transfers are going to fail. This can happen for a variety of reasons including invalid credentials, network connectivity issues or hardware failures. Regardless of the reason, it’s important that you be able to automatically respond to these failures in the most efficient way possible. Network connectivity issues are common but often short-lived, so rather than give up immediately, it is common practice to retry failed transfers with a certain wait period between each attempt. For example, you may configure your file transfers to retry up to 3 times with a wait period of 60 seconds between each retry. This gives the target server or network time to recover from the failure. Additionally, provided the protocol used supports it, retries should resume file transfers from the last byte successfully transferred instead of attempting to transfer the entire file in order to reduce bandwidth consumption and complete the transfer more quickly. This is especially important in very large file transfers of several GB or more. The ability to notify administrators via email or other means in the event of a hard failure is also key.

Track Results: Automating your file transfers is just one of the first steps in implementing a Managed File Transfer solution. Tracking the results of file transfers is equally important in that by doing so you gain improved visibility into your file transfers. This information will help you not only in troubleshooting failed file transfers but also in better understanding file transfer trends which can prove useful when scaling up hardware or network resources. The results of your file transfers should at a minimum be able to answer the following questions: Who initiated the file transfer? What is the path and size of the file transferred? Was the file transfer successful? When was the file transfer initiated and if successful when was it completed?”
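
As a rough sketch of the ‘Failures’ tip above – retrying up to three times with a 60-second wait, and resuming from the last byte received – here is what the pattern looks like using Python’s standard ftplib, whose REST support lets a download restart mid-file. Host, credentials and file names are placeholders, and a real MFT product would expose this behaviour as configuration rather than code:

```python
import ftplib
import time
from pathlib import Path

MAX_RETRIES = 3
WAIT_SECONDS = 60

def download_with_resume(host: str, user: str, password: str,
                         remote_name: str, local_path: Path) -> None:
    """Retry a download, resuming from the last byte already on disk."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            offset = local_path.stat().st_size if local_path.exists() else 0
            with ftplib.FTP(host) as ftp, open(local_path, "ab") as fh:
                ftp.login(user, password)
                # rest=offset issues an FTP REST command so the server
                # restarts the transfer where the last attempt stopped.
                ftp.retrbinary(f"RETR {remote_name}", fh.write, rest=offset)
            return
        except ftplib.all_errors:
            if attempt == MAX_RETRIES:
                raise  # hard failure: notify an administrator here
            time.sleep(WAIT_SECONDS)  # give the network/server time to recover
```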

Michael Ryan, CEO, South River Technologies

“For a Managed File Transfer implementation to be successful and truly improve productivity, it must consider both automated transfers and user-initiated transfers. Both aspects should be balanced, while also considering the security implications.

Easy User Access.  Make sure that your Managed File Transfer solution makes it easy for users to access and collaborate on files.  If there are additional security steps that users have to perform, they may work around the security measures.  Security should be invisible to users, who need to focus on accomplishing their work.

Integration capability.  It’s important to consider solutions that will easily integrate with both existing corporate applications, as well as desktop user applications.  This extends your investment in existing technologies and reduces the training requirements for your MFT implementation.

Provide optimal performance.  Load balancing and automated failover are key requirements for performance and high availability, but also consider the server architecture.  Native 64-bit applications take full advantage of the resources of each individual server, which makes a load-balanced implementation even more effective in achieving high throughput and performance.”

In conclusion, we can see that our experts have differing opinions on the important considerations for file transfer automation. All are valid, and I’d suggest you take as many as possible into account when evaluating a Managed File Transfer solution.

Of course, there are many solutions in the marketplace, all with their slight differences in features, benefits and total cost of ownership. If you’re considering implementing Managed File Transfer software to provide file transfer automation, or reviewing what you already have in place, then we’d be happy to help. You can speak to one of our friendly MFT experts by calling 0333 123 1240 or get in touch here.

Resources Available For You

The Expert Guide to Managed File Transfer

Includes definitions, requirements assessment, product comparison, building your business case and much more.

Managed File Transfer Needs Analysis

200 essential questions to consider before implementing your chosen Managed File Transfer solution.

Managed File Transfer Comparison Guide

A full feature comparison of Managed File Transfer solutions from the eight leading vendors in the industry.

Accelerated File Transfer – Extreme Speeds Ahead

If you’re looking to move a large amount of data, a Managed File Transfer (MFT) solution can help with the automation but, for the most part, it will still transmit the file at the same rate as a traditional FTP client.

There are some delivery protocols now being incorporated into MFT solutions which significantly increase the speed of transmission. Unfortunately, there are no open-standard protocols for high-speed transmission, so at the moment your options are going to tie you to one vendor or another. Typically, a dedicated client is also required, and some of these are not easy to integrate into an automation process.

Software vendors have largely looked to solve the fast file transfer problem using two differing approaches: protocol development has centred either on multi-threaded TCP streams or on extensions of the open-source UDP/UDT projects.

Transmission Control Protocol (TCP)

TCP is the underlying network communication protocol used in all standard MFT protocols. Fundamentally, a file or message is split up into small packets, which are numbered, checksummed and then sent to a remote server. At the other end of the communication channel each packet is stripped of its header information and checksummed again. If the checksums match, an acknowledgement packet is returned to the sender to confirm successful receipt. If the checksums do not match, the packet is considered to have been corrupted in transit and a non-acknowledgement is sent with a request to send the packet again. From the sender’s end, if no acknowledgement packet is received within a specified time, the packet is assumed to have been lost and is sent again.
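
A toy model makes the cycle clearer. This is a simplified stop-and-wait scheme, not real TCP (which uses a 16-bit checksum, sliding windows and much more), but it shows the checksum–acknowledge–resend loop described above:

```python
import hashlib

def checksum(payload: bytes) -> str:
    # Real TCP uses a 16-bit ones'-complement checksum; a hash stands in here.
    return hashlib.md5(payload).hexdigest()

def send_packet(payload: bytes, channel) -> bool:
    """Send one packet and block until its acknowledgement (stop-and-wait)."""
    packet = {"data": payload, "checksum": checksum(payload)}
    reply = channel(packet)   # transmit and wait for the reply
    return reply == "ACK"     # "NAK" (or a timeout) means resend

def receive_packet(packet: dict) -> str:
    # Recompute the checksum on arrival; a mismatch means corruption in transit.
    if checksum(packet["data"]) == packet["checksum"]:
        return "ACK"
    return "NAK"  # ask the sender to transmit the packet again

# A loopback "network" demonstrates the round trip:
assert send_packet(b"hello", receive_packet) is True
```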

With poor-quality networks, it is possible for the original packet to be received intact but for the acknowledgement packet to be corrupted or lost in transmission. This causes the whole packet to be resent unnecessarily, increasing transmission times. In addition, because of the way TCP ramps a transfer up, not all available bandwidth is used early in the cycle, and the rate of transmission is dictated by how much data gets through before failures start to occur.


User Datagram Protocol (UDP)

UDP-based fast transfer protocols are becoming more common, with at least three major vendors incorporating them into their MFT solutions. These work in a similar way to traditional TCP-based transfers, but they do not wait for an acknowledgement of each packet; they assume that the packet arrived intact. A TCP control channel is kept open, over which retry requests can be sent if there is an issue with a packet.

UDP-based transfers are much more efficient over long distances and on poor-quality networks. For example, some of the major internet video services, such as Netflix or LoveFilm, embed them into their software to deliver video to customers’ homes effectively.
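
The contrast can be shown with a toy simulation: the sender fires off every numbered chunk without waiting for individual acknowledgements, and only the gaps the receiver reports are resent. In a real product those retry requests travel over the separate TCP control channel; here a simple loop with simulated packet loss stands in for the whole exchange:

```python
import random

def udp_style_transfer(chunks: list[bytes]) -> dict[int, bytes]:
    """Blast all numbered chunks at once; re-request only the gaps."""
    received: dict[int, bytes] = {}
    pending = list(enumerate(chunks))
    while pending:
        lost = []
        for seq, data in pending:       # send everything without waiting
            if random.random() < 0.1:   # simulate 10% datagram loss
                lost.append((seq, data))
            else:
                received[seq] = data
        # In a real product the receiver sends the missing sequence numbers
        # back over the TCP control channel; here we simply loop on the gaps.
        pending = lost
    return received

chunks = [bytes([i]) * 4 for i in range(100)]
assert udp_style_transfer(chunks) == dict(enumerate(chunks))
```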

A little bit of testing…

During software evaluation testing we moved gigabytes of data from one Amazon data centre in the US to one in Europe, and saw a 40% increase in the speed of transmission when using UDP-based transfers. When moving data out of the Amazon environment, we saw even greater speed improvements. Multi-threaded TCP-based protocols improved speed depending on the “shape” of the data: a single very large file moved more quickly than lots of small files, as the overhead of negotiating the transfer for each file had a significant impact on transmission time.

Even with all these considerations, we found both methods improved transfer speeds by a significant amount. A “raw” FTP transfer took 10 minutes to move a file over a domestic broadband connection, whereas the accelerated protocols took between 7 and 8 minutes – a saving of 20-30%.


In summary…

At the moment, it is too early to say whether one protocol or solution will win out over the others, but the trend seems to be that UDP-based products are slightly faster and are therefore being more widely adopted by Managed File Transfer software vendors. A lot may depend on whether any vendor opens up their protocol to be incorporated into other applications.

What An Amazing Year, All Thanks To…

I’m breaking cover in this week’s blog post. Usually you’d get some useful nugget of information or a technical tip about Managed File Transfer; however, this week, as our financial year draws to a close, I want to put on record my thanks for an outstanding year by the whole team at Pro2col.

The expanding team has delivered significant growth, over 55% in each of the past two years, culminating in an impressive year end. Three areas have contributed substantially to the growth: organisations have realised that highly available file transfer systems are critical, that moving data to and from the cloud isn’t straightforward, and that they need to secure their data exchanges ahead of GDPR, which is just 14 months away!

In this past year, we’ve added some stellar logos to our already impressive list of clients, delivering projects for KPMG, Royal Sun Alliance, Williams Lea, Hitachi Rail, United Utilities & Skanska, to name just a few. None of this would have been possible without the expertise of the dedicated team we’ve developed at Pro2col over the past few years.

We’re in the process of adding some exciting new products to our portfolio, to better address the secure messaging and enterprise sync-and-share space. Our objective is to provide the finest portfolio of secure data exchange, collaboration and messaging solutions to UK businesses, with the technical expertise our customers need to ensure smooth delivery and wide end-user adoption.

Further progress is being made on a number of other fronts, including ISO 9001 accreditation, which is nearly complete. In addition, we’ve recently added an experienced Channel/Vendor Manager to support the increasing number of Value Added Resellers and System Integrators that regularly call upon our expertise.

The future looks extremely bright at Pro2col, and I want to put on record my thanks to the team for the excellent work they’ve put in this year. I’m really proud of the progress we’ve made and look forward to the rest of 2017 with great anticipation.

James Lewis
Managing Director
Pro2col

Disaster Recovery in Managed File Transfer

There is an increasing reliance on high availability to protect Managed File Transfer (MFT) systems, and indeed most MFT vendors provide a robust solution, often offering either Active-Active or Active-Passive configurations. There are, however, many circumstances where high availability is simply not an option, whether due to cost, infrastructure or some other reason. In that case it is necessary to revert to the ‘old school’ way of doing things, using some form of backup-restore mechanism to provide disaster recovery.

To be clear about the distinction between high availability and disaster recovery, this article is based upon the following definitions:

In high availability there is little or no disruption of service; after part of the infrastructure fails or is removed, the rest of the infrastructure continues as before.

In disaster recovery, the service is recovered to a new, cold or standby instance, either by automatic or manual actions.  This includes VM snapshots and restoring to a non-production environment.

Planning Ahead

It goes without saying that disaster recovery isn’t something you can achieve on the fly; it’s important to have detailed plans and to rehearse them regularly until recovery completes flawlessly every time. When you start planning for disaster recovery, the very first question should be: “What should my recovery environment look like?”

This might sound like a strange way to start, but take a moment to consider why you have a Managed File Transfer system in the first place. Do you need the data stored in it to be available following the recovery, or just the folder structure? It’s best practice not to leave data in the MFT system for long periods – it should hold only transient data, with an authoritative copy secured elsewhere. If you continue with this train of thought, consider how valid the content of any backup would be if, for example, it is only taken once per day: that could mean 23 hours and 59 minutes since the previous backup, and a lot can change in that time.
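
If you do treat the MFT store as transient, a scheduled purge keeps it that way. A minimal sketch, assuming a file-based store and a seven-day retention period (both assumptions, to be adjusted to your own policy):

```python
import time
from pathlib import Path

MFT_DATA = Path("/mft/data")   # hypothetical MFT storage root
RETENTION_DAYS = 7

def purge_stale_files() -> None:
    """Remove files older than the retention period from the MFT store."""
    cutoff = time.time() - RETENTION_DAYS * 86_400
    for path in MFT_DATA.rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()  # safe only because the authoritative copy lives elsewhere
```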

Similarly, consider that you may have another system sending data into MFT on a frequent basis. If that system also needs to be recovered (due perhaps to a site outage), then you will need to find a common point in time to recover to, or risk duplicate files being sent following recovery activities (see RPO below).

Should your recovery environment be sized similarly to the production environment? Ideally, the answer is always going to be yes, but what if your production system is underused, or sized to accommodate periodic peaks in activity? In that case, a smaller, less powerful environment may be used.

RTO and RPO

Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are the two most critical points of any disaster recovery plan.  RTO is the length of time that it will take to recover your MFT system; RPO is the point in time that you will set your MFT system back to – frequently this is the last successful backup.  As already mentioned, you may need to synchronise the RPO with other independent systems.  Once you have decided upon the RPO, you need to plan how you will handle transfers which may have occurred since that time; will they be resent, or do you need to determine which files must not be resent?  Will you need to request inbound files to be sent again?

You can only set a realistic RTO by executing a recovery test. This will enable you to gauge accurately how long the restore process takes; remember that some activities may be executed in parallel, resources permitting.

Hosting MFT systems on Virtual Machine (VM) farms has changed the way we think about RPO and RTO somewhat. In general, virtualisation platforms offer several possibilities for recovery, including:

  • A system snapshot taken periodically and shipped to the recovery site
  • Replication of the volume containing the VM (and generally several other VMs)
  • Hyper-V shared cluster environment

Of these, a Hyper-V shared cluster probably comes closest to being a high availability alternative; however, it should be remembered that it uses asynchronous replication of the VM, which means some data or transactions will be lost, albeit a small amount.

The Recovery

Let’s assume that you’ve decided to avoid the RPO question by presenting an ‘empty’ system in your recovery site. This means that you will need to periodically export your production configuration and ship it to the recovery site. Ideally, you would do this at least daily, but possibly more frequently if you have a lot of changes. Some MFT systems allow you to export and ship the configuration using the MFT product itself – a neat, self-contained method that should be used if it’s available. That way you are more likely to have the very latest copy of the configuration when the server becomes unavailable.
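
Where the product doesn’t ship its own configuration, an external scheduled job can fill the gap. The sketch below assumes the configuration lives in a directory and that the recovery site is reachable over SSH with key-based authentication – both assumptions to verify against your own product and estate:

```python
import subprocess
import tarfile
from datetime import datetime
from pathlib import Path

CONFIG_DIR = Path("/opt/mft/config")    # hypothetical config location
RECOVERY_HOST = "dr-site.example.com"   # hypothetical recovery server

def ship_config() -> None:
    """Archive the MFT configuration and copy it to the recovery site."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M")
    archive = Path(f"/tmp/mft-config-{stamp}.tar.gz")
    with tarfile.open(str(archive), "w:gz") as tar:
        tar.add(str(CONFIG_DIR), arcname="mft-config")
    # Copy to the recovery site; scp must be set up with key-based auth.
    subprocess.run(["scp", str(archive), f"{RECOVERY_HOST}:/opt/mft/dr/"],
                   check=True)
```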

The actual MFT software may or may not be installed in advance, depending upon your licence agreement (some vendors permit this, others not – be sure to check as part of your planning).  In any event, it is best to keep the product installation executable(s) on the server in case they are required.

Next on the list of things to think about: what else do you need to complete the recovery? Unfortunately, the answer can be quite long:

  • DNS
    – Can the new server have the same IP address as the old? Do you need to add it? The new server may well be on a completely different subnet.
    – If you are using DNS CNAME records to reach the server, where are they updated?
    – Is there a load balancer to be updated?
    – Does the recovery server have the same firewall rules as the production server?
    – Are you using a forward proxy to send traffic out of the network, and if so will it present the same source IP address?
    – If you have multiple sites defined in your MFT system, does each have a unique IP address?
  • Keys and Certificates
    – Are these included as part of the system configuration, or do they have to be handled separately?
    – Are PGP key-rings held in the home directory of the account that the MFT system runs under?
  • User Accounts
    – Does your configuration export include locally defined users? Do you make use of local groups on the server which may not be present on the recovery server?
    – Will LDAP/LDAPS queries work equally well from this machine?
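
Many of the items in this checklist can be turned into a scripted ‘preflight’ test to run from the recovery server during each exercise. A minimal sketch covering DNS resolution and firewall/port reachability (the hostnames and ports are placeholders):

```python
import socket

CHECKS = [
    ("mft.example.com", 443),          # does DNS resolve, and is the web UI open?
    ("partner-sftp.example.com", 22),  # is the outbound firewall rule present?
    ("ldap.example.com", 636),         # can the recovery server reach LDAPS?
]

def preflight() -> bool:
    """Attempt a TCP connection to each endpoint and report the results."""
    ok = True
    for host, port in CHECKS:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"OK   {host}:{port}")
        except OSError as exc:
            print(f"FAIL {host}:{port} ({exc})")
            ok = False
    return ok
```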

Returning to Normal Operations

Sooner or later you will need to switch operations back to the normal production environment. Unfortunately, this isn’t always as straightforward as you might wish.

When disaster struck and you initiated your disaster recovery plan, you were forced into it by circumstances: it was safe to assume that data had been lost, and the important thing was to get the system back. Now, however, your recovery environment may have been running for days and will probably have seen a number of transfers. At this point you need to ‘drain’ your system of active transfers and identify any files which have been uploaded into your MFT system but not yet downloaded.

Some MFT systems keep track of which files have been transferred (to avoid double sending); if your MFT system is one of these, then you will need to ensure that the production system knows which files the recovery system has already handled.  Regardless of this, you will need to ship your configuration back to the production server in order to accommodate any changes that have occurred – for example, users being created or deleted, or even simply changing their password.
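
If both systems can export a list of handled files, the reconciliation can be as simple as a set difference. A sketch, assuming each exported log holds one file name per line (the log format is an assumption, not a vendor feature):

```python
from pathlib import Path

def already_handled(log: Path) -> set[str]:
    """Each line of the exported log is assumed to name one transferred file."""
    return {line.strip() for line in log.read_text().splitlines() if line.strip()}

recovery = already_handled(Path("recovery-transfers.log"))
production = already_handled(Path("production-transfers.log"))

# Files the recovery system sent that production doesn't know about --
# these must be marked as done before switching back, or they may be resent.
for name in sorted(recovery - production):
    print(name)
```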

Synchronise the return with other applications that send data through the MFT system, to avoid data bottlenecks forming during the move; remember that any DNS changes you make at this point may take some time to propagate across the network.

Keeping the Recovery System Up To Date

Of course, any update you make to your production environment could easily invalidate your recovery environment. An example might be something as simple as resizing a disk or adding a new IP address – both of these activities should be covered by change management practices, but we all know that replication of changes into the recovery environment doesn’t always happen. This is why it’s so important to perform disaster recovery exercises regularly – every six months or so – so that you can identify and resolve these discrepancies before a disaster occurs. When considering which changes need to be replicated, look again at the areas you considered when first setting up the recovery environment.

Ramifications of Not Being Disaster Ready

For many organisations, their MFT system is far more important than people realise. An unplanned outage will prevent goods from being shipped, orders being placed and payments being sent, which has a negative impact not only at a financial level but also in intangible ways that aren’t so easily quantified. How likely are customers to use your services in the future if they can’t rely on their availability now?

There’s also the certification aspect to consider. If you are pursuing ISO 27001 certification, you need to have a realistic plan in place, and to test and maintain it – neglecting this will result in an audit failure, and potential loss of certification if it has already been awarded.

Finally, the most important thing to do is document EVERYTHING. Every step should be written so that someone without specific knowledge of the system can follow it. Every change should be recorded and every test detailed, regardless of success or failure.

Should You Password Protect Links to Your Files?

As file transfer specialists, we speak to customers every day who are trying to strike the balance between ease of use and security. Internal users want to be able to share a file quickly and simply. They don’t want to ask the recipient, who may be faced with a myriad of different systems, to go through complex authentication processes. All end users want the simplicity of the cloud systems they use at home, which are great for sharing your holiday photos.

However, a ruling in the US courts last month may have just swung the balance in favour of a more secure approach.

Video footage related to a court case was uploaded, and a link without password protection was shared between the firm, its parent company and the investigating team. At a later stage, further legal files were added to the same folder. The link was included in the police files and was then forwarded to the opposing legal firm, who were able to download all of the files before the court case.

The judge ruled that the company had waived any claim of privilege to materials as they were accessible to anyone who had the hyperlink. “In essence, the defendant conceded that its actions were the cyber world equivalent of leaving its claims file on a bench in the public square and telling its counsel where they could find it. It is hard to imagine an act that would be more contrary to protecting the confidentiality of information than to post that information to the world wide web.”

This ruling should make us all think twice before putting confidential documents in a file-sharing site without password protection, especially when there are so many secure alternatives available.

Metadata in the Managed File Transfer Space

One of the limitations of using any file transfer protocol is describing the file that is being transferred.  In early iterations of many (though not all) solutions, this was not even a consideration – if you needed to add some information, you included a header, or possibly just another file to describe the first.  This was (and still is) very cumbersome, requiring a file to be opened just to determine its content.

Zenodotus

Someone who was way ahead of the game on this issue was the ancient scholar and literary critic Zenodotus, who in around 280 BC became the first librarian of Alexandria. Zenodotus organised the library by subject matter and author but, more importantly for this blog, attached a small tag to each scroll describing its content, title, subject and author. This meant that scholars no longer had to unroll scrolls to see what they contained, and it is the first recorded use of metadata.

In IT terms, metadata came into play in the 1970s as a method of locating data when designing databases, but it really became established as an integral part of data manipulation when XML became popular for web services.

Metadata in MFT

In terms of Managed File Transfer (MFT), if we consider a file being transferred as analogous to a scroll, we might use the metadata ‘tag’ to record things about the file – the person sending it, its content, its final recipient and perhaps a checksum hash.  The possibilities for use are endless and we very quickly get to a point of wondering how we ever got by without it.

But before you start googling how to add metadata to a traditional transfer, you should be aware that the only metadata you are likely to be able to access over FTP or SFTP is the filename and creation date (occasionally permissions or ownership too, depending upon the system). Obviously, this isn’t much use for describing the data – what’s required is a little help from the file transfer vendors. This is normally delivered to end users via a webform – an HTML-based form containing several input fields completed at upload time – or via some form of API. The metadata is then stored either in XML files or, more commonly, a database, from where it can be related to the files and queried as required.
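
On the storage side, the principle is straightforward. A minimal sketch recording webform fields plus a checksum against each upload in SQLite – the schema and field names are illustrative, not any vendor’s:

```python
import hashlib
import sqlite3
from pathlib import Path

db = sqlite3.connect("mft-metadata.db")
db.execute("""CREATE TABLE IF NOT EXISTS file_metadata (
    filename TEXT, sender TEXT, recipient TEXT,
    description TEXT, sha256 TEXT)""")

def record_upload(path: Path, sender: str, recipient: str,
                  description: str) -> None:
    """Store the webform fields and a content checksum for one uploaded file."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    db.execute("INSERT INTO file_metadata VALUES (?, ?, ?, ?, ?)",
               (path.name, sender, recipient, description, digest))
    db.commit()

# An automation engine could later query the metadata to route the file, e.g.
# db.execute("SELECT recipient FROM file_metadata WHERE filename = ?", (name,))
```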

What do I do with the Metadata?

Generating a webform for capturing metadata in a Managed File Transfer system is actually quite simple – the challenge comes later, when trying to maintain the relationship to the file. For example, will your automation engine be able to (a) read the metadata, and (b) act upon it to determine what to do with the file? It is quite straightforward to plan, but unfortunately not so simple to implement.

Some vendors have a slightly more advanced workflow methodology than others – if webforms and metadata are necessary for your environment, it may be worthwhile looking at out-of-the-box solutions rather than coding your own. The challenges of building, securing and maintaining a webform-and-workflow combination frequently outweigh the costs of such a system. Without doubt, however, all the major MFT vendors provide some form of webform integration to one extent or another. Metadata is here to stay in the world of MFT but, at the time of writing, there is no industry standard, clear winner or even preferred direction.
