
Disaster Recovery in Managed File Transfer


There is an increasing reliance on high availability to protect Managed File Transfer (MFT) systems, and indeed most MFT vendors provide a robust solution, often offering both Active-Active and Active-Passive configurations.  There are, however, many circumstances where high availability is simply not an option, whether due to cost, infrastructure or some other reason.  In this case it is necessary to revert to the ‘old school’ way of doing things, using some form of backup-restore mechanism to provide disaster recovery.

Just to be clear for those whose understanding of the difference between high availability and disaster recovery differs from mine, this article is based upon the following definitions:

In high availability there is little or no disruption of service; after part of the infrastructure fails or is removed, the rest of the infrastructure continues as before.

In disaster recovery, the service is recovered to a new, cold or standby instance, either by automatic or manual actions.  This includes VM snapshots and restoring to a non-production environment.

Planning Ahead

It goes without saying that disaster recovery isn’t something you can achieve on the fly; it’s important to have detailed plans and to rehearse them regularly until recovery always completes flawlessly.  When you start planning for disaster recovery, the very first question should be “What should my recovery environment look like?”

This might sound like a strange way to start, but take a moment to consider why you have a Managed File Transfer system in the first place.  Do you need the data stored in it to be available following the recovery, or just the folder structure?  It’s best practice not to leave data in the MFT system for long periods of time – it should contain transient data, with an authoritative source secured elsewhere.  Following this train of thought, think about how valid the content of any backup would be if (for example) it is only taken once per day.  Potentially that could mean 23 hours and 59 minutes since the previous backup; a lot can change in that time.

Similarly, consider that you may have another system sending data into MFT on a frequent basis; if that system needs to be recovered (due perhaps to a site outage), then you will need to find a common point in time that both systems can be recovered to, or risk duplicate files being sent following recovery activities (see RPO below).

Should your recovery environment be similarly sized to the production environment?  Ideally, the answer is always going to be yes, but what if your production system is underused, or sized to accommodate periodic peaks in activity?  In that case, a smaller, less powerful environment may be used.

RTO and RPO

Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are the two most critical points of any disaster recovery plan.  RTO is the length of time that it will take to recover your MFT system; RPO is the point in time that you will set your MFT system back to – frequently this is the last successful backup.  As already mentioned, you may need to synchronise the RPO with other independent systems.  Once you have decided upon the RPO, you need to plan how you will handle transfers which may have occurred since that time; will they be resent, or do you need to determine which files must not be resent?  Will you need to request inbound files to be sent again?
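
To make the RPO question concrete, the sketch below filters an MFT transfer log for anything completed after the recovery point, splitting the results into inbound files that may need to be requested again and outbound files that may need to be resent. The log format and field names are illustrative – they are not any particular vendor’s schema.

```python
from datetime import datetime

def transfers_since_rpo(transfer_log, rpo):
    """Return transfers completed after the recovery point, by direction.

    Inbound files may need to be requested again from the sender;
    outbound files may need to be resent (or explicitly suppressed).
    """
    gap = [t for t in transfer_log if t["completed"] > rpo]
    inbound = [t["file"] for t in gap if t["direction"] == "in"]
    outbound = [t["file"] for t in gap if t["direction"] == "out"]
    return inbound, outbound

# Illustrative log entries; a real log would come from the MFT's audit trail.
log = [
    {"file": "orders.csv",  "completed": datetime(2024, 5, 1, 2, 15), "direction": "in"},
    {"file": "invoice.pdf", "completed": datetime(2024, 5, 1, 9, 30), "direction": "out"},
    {"file": "payroll.dat", "completed": datetime(2024, 5, 1, 11, 5), "direction": "in"},
]
rpo = datetime(2024, 5, 1, 3, 0)  # time of the last successful backup
inbound, outbound = transfers_since_rpo(log, rpo)
```

Anything in these two lists happened after the point your restore will roll back to, so each file needs an explicit decision before go-live.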

You can only decide on a realistic RTO by executing a recovery test.  This will enable you to accurately gauge the amount of time the restore process takes; remember that some activities may be executed in parallel, assuming available resources.
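
One way to estimate RTO from a rehearsed plan is to treat the recovery steps as a dependency graph and take the critical path, since independent steps can run in parallel. The task names and durations below are made up for illustration.

```python
def estimated_rto(tasks):
    """Estimate RTO as the critical path through the recovery plan.

    tasks: {name: (duration_minutes, [dependencies])}
    Tasks with no mutual dependency are assumed to run in parallel.
    """
    finish = {}

    def finish_time(name):
        if name not in finish:
            duration, deps = tasks[name]
            finish[name] = duration + max((finish_time(d) for d in deps), default=0)
        return finish[name]

    return max(finish_time(t) for t in tasks)

# Hypothetical recovery plan, timed during a DR exercise.
plan = {
    "provision_vm":   (30, []),
    "install_mft":    (45, ["provision_vm"]),
    "restore_config": (20, ["install_mft"]),
    "update_dns":     (15, ["provision_vm"]),   # can run alongside the install
    "smoke_test":     (10, ["restore_config", "update_dns"]),
}
# Critical path: provision_vm -> install_mft -> restore_config -> smoke_test
```

Re-run the calculation with the durations actually measured in each exercise; the critical path also tells you which steps are worth optimising.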

Hosting of MFT systems on Virtual Machine (VM) farms has changed the way that we consider RPO and RTO somewhat.  In general, virtualisation platforms allow us several possibilities for recovery, including:

  • A system snapshot taken periodically and shipped to the recovery site
  • Replication of the volume containing the VM (and generally several other VMs)
  • Hyper-V shared cluster environment

Of these, the Hyper-V shared cluster probably comes closest to being a high availability alternative; however, it should be remembered that it uses asynchronous replication of the VM, which means that there is a loss of data or transactions, albeit a small one.

The Recovery

Let’s assume that you’ve decided to avoid the RPO question by presenting an ‘empty’ system in your recovery site.  This means that you will need to periodically export your production configuration and ship it to the recovery site.  Ideally, you would want to do this at least daily, but possibly more frequently if you have a lot of changes.  Some MFT systems allow you to export and ship the configuration using the MFT product itself – this is a neat, self-contained method that should be used if it’s available.  In this way you are more likely to have the very latest copy of the configuration from before the server became unavailable.
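
As a rough sketch of the export-and-ship approach, the snippet below wraps a hypothetical vendor CLI (`mftctl export` stands in for whatever export command your product provides) and prunes old exports so the replicated share doesn’t grow without bound. The command name and share path are assumptions, not a real product’s interface.

```python
import subprocess
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical path: a share replicated to (or mounted at) the recovery site.
DR_DROP = Path("/mnt/dr-share/mft-config")

def export_config(dest: Path, now=None) -> Path:
    """Run the (hypothetical) vendor export command, writing a dated archive."""
    now = now or datetime.now(timezone.utc)
    target = dest / f"mft-config-{now:%Y%m%d-%H%M%S}.zip"
    subprocess.run(["mftctl", "export", "--output", str(target)], check=True)
    return target

def prune_exports(dest: Path, keep: int = 7) -> list:
    """Delete all but the newest `keep` exports; return the removed paths."""
    exports = sorted(dest.glob("mft-config-*.zip"), reverse=True)
    stale = exports[keep:]
    for path in stale:
        path.unlink()
    return stale
```

Scheduled daily (or hourly, if your configuration churns), this keeps the recovery site holding a recent configuration without manual effort.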

The actual MFT software may or may not be installed in advance, depending upon your licence agreement (some vendors permit this, others do not – be sure to check as part of your planning).  In any event, it is best to keep the product installation executable(s) on the server in case they are required.

So next on the list of things to think about is: what else do you need to complete the recovery?  Unfortunately, the list can be quite long:

  • DNS
    Can the new server have the same IP address as the old?  Do you need to add it?  The new server may well be on a completely different subnet.
    If you are using DNS CNAME records to reach the server, where are they updated?
    Is there a load balancer to be updated?
    Does the recovery server have the same firewall rules as the production server?
    Are you using a forward proxy to send traffic out of the network, and if so will it present the same source IP address?
    If you have multiple sites defined in your MFT system, does each have a unique IP address?
  • Keys and Certificates
    Are these included as part of the system configuration or do they have to be handled separately?
    Are PGP key-rings held in the home directory of the account that the MFT system runs under?
  • User Accounts
    Does your configuration export include locally defined users?  Do you make use of local groups on the server which may not be present on the recovery server?
    Will LDAP/LDAPS queries work equally well from this machine?
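
Several of the checklist items above (DNS, firewall rules, certificates) can be verified automatically from the recovery server itself. The stdlib-only sketch below shows the kind of probes a regular DR exercise might script; the hostnames and thresholds would come from your own plan.

```python
import socket
import ssl
from datetime import datetime, timezone

def dns_resolves(hostname: str) -> bool:
    """Does this name resolve from the recovery server's point of view?"""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Can we actually reach the listener (firewall rules, load balancer)?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def days_until_cert_expiry(host: str, port: int = 443) -> float:
    """How long before the TLS certificate presented at host:port expires?"""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            expires = ssl.cert_time_to_seconds(tls.getpeercert()["notAfter"])
    return (expires - datetime.now(timezone.utc).timestamp()) / 86400
```

Running checks like these at the end of every DR exercise turns the checklist into a repeatable pass/fail report rather than a manual walk-through.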

Returning to Normal Operations

Sooner or later you will need to switch operations back to the normal production environment.  Unfortunately, this isn’t always as straightforward as you might wish.

When disaster struck and you initiated your disaster recovery plan, you were forced into it by circumstances.  It was safe to assume that data was lost and the important thing was to get the system back again.  Now however your recovery environment may have been running for days and it will probably have seen a number of transfers.  At this point you need to ‘drain’ your system of active transfers and potentially identify files which have been uploaded into your MFT but have not yet been downloaded.

Some MFT systems keep track of which files have been transferred (to avoid double sending); if your MFT system is one of these, then you will need to ensure that the production system knows which files the recovery system has already handled.  Regardless of this, you will need to ship your configuration back to the production server in order to accommodate any changes that have occurred – for example, users being created or deleted, or even simply changing their password.
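
Reconciling the two systems boils down to a set difference over stable file identifiers. The toy example below uses (path, checksum) pairs, because file names alone are unsafe – a partner may legitimately resend a file with the same name but different content.

```python
def files_to_import(production_history, recovery_history):
    """Files the recovery system handled that production doesn't know about.

    Both arguments are sets of stable identifiers; here a (path, checksum)
    pair stands in for whatever unique transfer ID your MFT records.
    """
    return recovery_history - production_history

# Illustrative histories from each system's transfer log.
prod = {("out/invoice_1001.pdf", "9f2c"), ("in/orders_0430.csv", "11aa")}
dr = {("out/invoice_1001.pdf", "9f2c"), ("out/invoice_1002.pdf", "b7e1")}
missing = files_to_import(prod, dr)
```

Whatever ends up in `missing` must be fed into the production system’s “already sent” list before cut-over, or it risks being transferred twice.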

Synchronise the return with other applications that send data through the MFT system in order to avoid data bottlenecks from forming during the move; remember that any DNS changes you need to make at this time may take some time to be replicated through the network.

Keeping the Recovery System Up To Date

Of course, any time you make an update to your production environment, it could easily invalidate your recovery environment.  An example might be something as simple as resizing a disk, or adding a new IP address – both of these activities should hopefully be covered by change management practices, but of course we all know that replication of changes into the recovery environment doesn’t always happen.  This is why it’s so important to perform regular disaster recovery exercises every six months or so, so that you can identify and resolve these discrepancies before a disaster occurs.  When considering what changes need to be replicated, look again at the areas you need to consider when first setting up the recovery environment.

Ramifications of Not Being Disaster Ready

For many organisations, their MFT system is far more important than people realise.  An unplanned outage will prevent goods from being shipped, orders placed and payments being sent, which obviously has a negative impact at not only a financial level, but also in other intangible levels that aren’t so easily quantifiable.  How likely are customers to use your services in the future if they can’t rely on their availability now?

There’s also the certification aspect to consider.  If you are pursuing ISO 27001 certification, you need to have a realistic plan in place, and to test and maintain it – neglecting this will result in an audit failure, and the potential loss of certification if it has already been awarded.

Finally, the most important thing to do is document EVERYTHING.  Every step should be able to be followed by someone without specific knowledge of the system.  Every change should be recorded, every test detailed, regardless of success or failure.

Resources Available For You

The Expert Guide to Managed File Transfer

Includes definitions, requirements assessment, product comparison, building your business case and much more.

Managed File Transfer Needs Analysis

200 essential questions to consider before implementing your chosen Managed File Transfer solution.

Managed File Transfer Comparison Guide

A full feature comparison of Managed File Transfer solutions from the eight leading vendors in the industry.

Metadata in the Managed File Transfer Space


One of the limitations of using any file transfer protocol is describing the file that is being transferred.  In early iterations of many (though not all) solutions, this was not even a consideration – if you needed to add some information, you included a header, or possibly just another file to describe the first.  This was (and still is) very cumbersome, requiring a file to be opened just to determine its content.

Zenodotus

Someone who was way ahead of the game on this issue was the ancient scholar and literary critic Zenodotus, who in around 280 BC became the first librarian of the Library of Alexandria.  Zenodotus organised the library by subject matter and author but, more importantly for this blog, attached a small tag to each scroll describing its content, title, subject and author.  This approach meant that scholars no longer had to unroll scrolls to see what they contained, and it is the first recorded use of metadata.

In IT terms, metadata came into play in the 1970s as an alternative method of locating data when designing databases, but it really became established as an integral part of data manipulation when XML became popular for web services.

Metadata in MFT

In terms of Managed File Transfer (MFT), if we consider a file being transferred as analogous to a scroll, we might use the metadata ‘tag’ to record things about the file – the person sending it, its content, its final recipient and perhaps a checksum hash.  The possibilities for use are endless and we very quickly get to a point of wondering how we ever got by without it.

But before you start googling how to add metadata to a traditional transfer, you should be aware that the only metadata you are likely to be able to access over FTP or SFTP is the filename and creation date (occasionally permissions or ownership too, depending upon the system).  Obviously, this isn’t much use when describing the data – what’s required is a little help from the file transfer vendors.  This is normally delivered to end users via a webform – an HTML-based form with several input fields, completed at upload time – or via some form of API.  The metadata is then stored either in XML files or, more commonly, in a database, from where it can be related to the files and queried as required.
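
As a minimal illustration of the database approach, the sketch below stores webform fields alongside each uploaded file in SQLite and then queries them for routing. The table layout and field names are invented for the example, not any vendor’s actual schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE file_metadata (
        file_name  TEXT PRIMARY KEY,
        sender     TEXT,
        recipient  TEXT,
        content    TEXT,
        sha256     TEXT
    )
""")

def record_upload(file_name, sender, recipient, content, sha256):
    """Called at upload time with the values captured by the webform."""
    db.execute("INSERT INTO file_metadata VALUES (?, ?, ?, ?, ?)",
               (file_name, sender, recipient, content, sha256))

record_upload("q1_results.xlsx", "alice@example.com",
              "finance@example.com", "quarterly results", "3fb1")
record_upload("logo.png", "bob@example.com",
              "marketing@example.com", "brand asset", "77c0")

# The automation engine can now route a file without opening it:
row = db.execute("SELECT recipient FROM file_metadata WHERE file_name = ?",
                 ("q1_results.xlsx",)).fetchone()
```

This is the modern equivalent of Zenodotus’ tags: the description travels with the file in a form that can be queried without unrolling the scroll.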

What do I do with the Metadata?

Generating a webform for capturing metadata in a Managed File Transfer system is actually quite simple – the challenge comes later, when trying to maintain the relationship to the file.  For example, will your automation engine be able to (a) read the metadata, and (b) act upon it to determine what to do with the file?  It is quite straightforward to plan, but unfortunately not so simple to implement.

Some vendors have a slightly more advanced workflow methodology than others – if webforms and metadata are necessary for your environment, then it may be worthwhile looking at out-of-the-box solutions, rather than coding your own.  The challenges around building, securing and maintaining a webform and workflow combination frequently outweigh the costs of such a system.  Without doubt however, all the major MFT vendors provide some form of webform integration to one extent or another.  Metadata is here to stay in the world of MFT, but at the time of writing this there is no industry standard, clear winner or even preference in direction.

What is Managed File Transfer?


 

So, what is Managed File Transfer and why is it important?

In the digital economy, more than a third of all business-critical processes involve file transfers. Managed File Transfer is a solution that enables the secure transfer of data between two or more locations, connected via a network. It is typically delivered as on-site software but can also be offered as a cloud service.

Managed File Transfer offers a comprehensive set of features aimed at replacing: insecure, legacy FTP servers; home-grown, bespoke file transfer solutions; physical shipment of media (e.g. USB devices, DVDs and HDDs); consumer-grade cloud-based services; large email attachments; and the installation of expensive point-to-point leased lines and VANs.

Managed File Transfer can be a powerful business enabler that reduces costs and risk, improves efficiency and agility, and opens the door to new mobile, cloud and big data initiatives.

Managed File Transfer

A recent study by Aberdeen Group* showed that 65% of businesses that implemented Managed File Transfer did so to improve productivity. Managed File Transfer solutions typically have a range of features which enhance productivity whilst also improving security. For example, they:

  • Centralise support for multiple file transfer protocols including FTP/S, OFTP, SFTP, SCP, AS2, and HTTP/S.
  • Encrypt files throughout the file transfer process – in transit, at rest and even through automation – managed centrally via a simple-to-use software interface.
  • Automate file transfer processes with trading partners, from payroll to planning.
  • Detect and handle failed file transfers, including notification and initiating remedial action.
  • Authenticate users against existing user repositories such as LDAP and Active Directory.
  • Integrate with existing applications using documented APIs (application programming interfaces) to reduce the costs and risks of manual intervention.
  • Generate detailed reports on user and file transfer activity to highlight any areas requiring further improvement, or training.
  • Provide monitoring and dashboards giving a real-time view of your business-critical workflows.

Gartner* highlights that the two key differences between standard FTP servers and Managed File Transfer solutions are the ability to “manage” and the ability to “monitor”. Gartner defines these as:

  • Manage – means to manage all file transfers using one interface (one place) across all business units, operations, systems, applications, partners, etc.
  • Monitor – means to monitor all file transfers in one centralised location which in turn means better governance, compliance and reduced IT costs.

Managed File Transfer projects can range in size, scale and scope, from addressing tactical problems to delivering a strategic solution to manage all of a company’s data transfer requirements.

If you think your organisation could benefit from a Managed File Transfer solution, we have a range of resources to help get your project started. Why not download our Managed File Transfer comparison guide or look at our needs assessment options.

 

References*
From Chaos to Control, Aberdeen Group study, November 2013
Managed File Transfer Offers Solutions for Governance Needs, Gartner 2010

 

Protecting Your Data At Rest – What Are Your Options?


Modern Managed File Transfer (MFT) solutions provide several ways to protect data. In addition to using secure protocols for data in transit and protection against DDoS, hammering and brute force attacks, many solutions provide mechanisms for securing files at rest, while they are awaiting collection or processing.

Protecting data at rest

Protecting the files at rest can be achieved in several ways, with the most common being:

  • Writing to an encrypted file store
  • Encrypting data using PGP or similar
  • Securing files in another network segment

Encrypted File store

Encrypted file stores leverage either native encryption technology such as EFS, or use their own encryption methods to secure files stored in the data area of the MFT solution. Files are encrypted before they are stored, so there is no requirement for end users to manage keys.  Decrypting the data is also done on the fly, when a file is downloaded through the software. Browsing to the storage location from the operating system may show either the real file names or an anonymised series of files. The downside of this method is that data written to a Windows share is not accessible to other applications except via the solution, i.e. via an API.
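
The pattern is easier to see in code. The sketch below shows a transparent encrypt-on-write / decrypt-on-read store; note that the XOR keystream is a deliberately simple stand-in so the example stays dependency-free – a real implementation would use AES-GCM (for example via the ‘cryptography’ package) or platform features such as EFS.

```python
import hashlib
import secrets
from pathlib import Path

class EncryptedStore:
    """Sketch of a transparent encrypt-on-write / decrypt-on-read store.

    WARNING: the hash-based XOR keystream below is a toy illustration of
    the *pattern*, not real cryptography.
    """

    def __init__(self, root: Path, key: bytes):
        self.root = Path(root)
        self.key = key

    def _keystream(self, nonce: bytes, length: int) -> bytes:
        out = bytearray()
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(
                self.key + nonce + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:length])

    def write(self, name: str, data: bytes) -> None:
        nonce = secrets.token_bytes(16)
        cipher = bytes(a ^ b for a, b in zip(data, self._keystream(nonce, len(data))))
        (self.root / name).write_bytes(nonce + cipher)   # only ciphertext hits disk

    def read(self, name: str) -> bytes:
        blob = (self.root / name).read_bytes()
        nonce, cipher = blob[:16], blob[16:]
        return bytes(a ^ b for a, b in zip(cipher, self._keystream(nonce, len(cipher))))
```

The key point is that encryption and decryption happen inside the store’s API, which is exactly why other applications browsing the share see only ciphertext.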

If your MFT solution does not support encryption at rest natively, then there are several network storage devices which can present encrypted storage as a normal CIFS share. Using this as storage for your MFT solution will protect your data from physical theft but may not protect from access by internal users or systems.  Not all MFT solutions can be integrated with this type of encrypted storage device.

Use PGP

Another popular method is to secure the data using PGP. PGP gives you the option of encrypting a file outside of the MFT solution for full end-to-end security.  Alternatively, most MFT solutions support PGP encryption and decryption for incoming and outgoing files.  PGP encryption applied by the MFT system is triggered once a file has been successfully uploaded.  Once the file is PGP encrypted, it can be sent over to a remote system where it will need to be decrypted. While this process has many positives, not all MFT solutions support PGP encryption on the fly: the MFT solution must wait for the file to be uploaded and stored unencrypted before it attempts to PGP encrypt it. This means that there is a short period of time where the file will sit on the storage in an unencrypted state, and only once the encryption process has completed successfully will the unencrypted version of the file be deleted. As this whole encryption process only takes a few seconds, the exposure of the unencrypted data is minimal and many organisations are happy to accept the risk of temporarily unencrypted data.
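
The ordering described above – store, encrypt, verify, and only then delete the plaintext – can be sketched as follows. The `pgp_encrypt` function here is a placeholder for the real PGP step (for instance python-gnupg or the gpg CLI); the point of the example is the sequencing, not the cryptography.

```python
from pathlib import Path

def pgp_encrypt(plaintext: bytes) -> bytes:
    """Placeholder for the real PGP step (e.g. python-gnupg or the gpg CLI)."""
    return b"-----BEGIN PGP MESSAGE-----\n" + plaintext[::-1]  # illustrative only

def secure_after_upload(path: Path) -> Path:
    """Encrypt a freshly uploaded file, then remove the plaintext.

    Order matters: the encrypted copy is written and sanity-checked
    *before* the unencrypted original is deleted, so a crash part-way
    through never loses the file - at worst the plaintext lingers until
    the next run.
    """
    encrypted = path.with_name(path.name + ".pgp")
    encrypted.write_bytes(pgp_encrypt(path.read_bytes()))
    if encrypted.stat().st_size == 0:          # crude success check
        raise RuntimeError(f"encryption produced no output for {path}")
    path.unlink()                              # plaintext removed only now
    return encrypted
```

The brief window in which both copies exist on disk is exactly the “temporarily unencrypted” exposure the paragraph above describes.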

Network Segments

An alternative approach to protecting your data at rest is to use the forward/reverse proxy capabilities of MFT solutions. This adds an extra layer of defence to your MFT system’s security. As no data is stored on these proxies, any external attack that managed to compromise the proxy server would not be able to access any data, as it is safely stored on the main MFT server behind another firewall. Just like the encrypted file stores, these gateways are completely transparent to the end users.

Each of these measures helps protect data at rest, and they can all be combined to give a high level of protection. These methods can assist in meeting regulatory compliance requirements such as PCI DSS, ISO 27001, etc.


Managed File Transfer Comparison Guide


[Updated – 2017]

Our independently researched Managed File Transfer comparison guide has been updated for 2017. Our third edition reviews eight of the leading Managed File Transfer (MFT) solutions.   We’ve created this guide to enable businesses of all shapes and sizes to review the solutions side by side to speed up the selection process.

This version of the guide includes a section on cloud connectors for the first time. As businesses rapidly adopt cloud services, this has become an increasingly common discussion point.

It isn’t a definitive analysis of all the available MFT solutions, but it does include the most cost effective, popular and feature rich products in one place for you to review. The guide is split into six sections and provides an insight into the main questions that we’re asked on a daily basis.

  • Solution Basics – these are the key questions that more or less everyone asks us, when looking for a Managed File Transfer solution.
  • Business Strategy – this section prompts you to consider how your solution will be impacted by other policies within the business.
  • Technical Details – looks at some of the key features of Managed File Transfer solutions at a more granular level.
  • Automation Options – lists the most commonly required automation features, a key component of any Managed File Transfer solution.
  • Transfer Protocols – a review of eleven of the most widely used file transfer delivery protocols.
  • Cloud Connectors – a key differentiator at this point, we list eight of the most common cloud services that you’re likely to need to connect to.

The guide covers the most frequently asked questions, but naturally we can only include so much detail.  By the end of this document, however, you should have a clearer view of what specific features you need from your Managed File Transfer solution and which vendors are a good fit.

Once you’ve reviewed the comparison guide, I encourage you to review our other free resources. These are our Expert Guide to Managed File Transfer and the Managed File Transfer Needs Analysis.

It’s highly likely that you’ll have many more questions and our team of pre-sales and technical consultants are perfectly placed to provide you with further product information, demonstrations, and software evaluations. Additionally we’ll help you to build a stronger business case, comparing multiple similar solutions, highlighting how a solution might cater for future growth, providing cost comparisons and guidance on calculating ROI.

We look forward to helping you with your Managed File Transfer project in the near future!

The Advantages of Using a Forward and Reverse Proxy



There are many free ways to implement file transfer in an organisation, from using inbuilt FTP daemons on a Unix server, to installing Microsoft IIS or similar and even trying an open source Managed File Transfer (MFT) product.

What these products have in common is that connections are passed directly through to the server. If the server is sited in a DMZ, then connections pass over the external firewall, but all the data and account credentials are stored in the DMZ. Alternatively, if the server is located in the secure “internal” network zone, firewall ports would need to be opened up directly from the internet into this network zone which may violate internal security policies.

Modern MFT solutions approach this problem in one of two ways. Some products are designed to sit inside the DMZ and encrypt data at rest, while storing account credentials in an encrypted database. Firewall rules between the DMZ and internal network are not required except for collection of the data.

The other, which is by far the most popular way, is to use an additional server sited in the DMZ as a forward/reverse proxy server.

A proxy server based in the DMZ acts as a front end to the MFT solution. Connections are terminated at the proxy and passed back to the MFT server located in the internal network using another, often proprietary, port. The proxy itself does not store any data or account information but instead acts as an intermediary between the MFT server and the connecting client. This means that if the proxy server were to be compromised by malicious software, no sensitive data is at risk and the attack cannot get any deeper into the network.
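
A reverse proxy of this kind is, at its core, a byte relay that terminates the client connection in the DMZ and opens a second connection to the internal server. The single-session sketch below illustrates the idea; production gateways are far more involved (many concurrent sessions, protocol awareness, and often an internal server that dials out to the proxy instead).

```python
import socket
import threading

def serve_proxy(listen_sock: socket.socket, backend: tuple) -> None:
    """Accept one client on the DMZ-facing socket and relay it to the backend.

    The client only ever talks to the proxy; credentials and data stay on
    the internal MFT server at `backend`.
    """
    client, _ = listen_sock.accept()
    upstream = socket.create_connection(backend)

    def pump(src: socket.socket, dst: socket.socket) -> None:
        # Copy bytes until the sender closes, then pass the close along.
        while chunk := src.recv(4096):
            dst.sendall(chunk)
        dst.shutdown(socket.SHUT_WR)

    t = threading.Thread(target=pump, args=(client, upstream))
    t.start()
    pump(upstream, client)   # relay the backend's responses to the client
    t.join()
    client.close()
    upstream.close()
```

Because the proxy holds no files and no credentials, compromising it yields only a relay, which is precisely the security property described above.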

Outbound connections from an MFT solution located inside the secure network can also be routed through a proxy. This means that just a single port needs to be opened between the MFT server and the proxy located in the DMZ. For added protection, in most cases this connection is “outbound only” and needs to originate from the MFT server before the proxy responds to any connection attempts. From the proxy out to the internet, standard ports can then be used, making firewall rules more straightforward for the network team to configure.

If you implement a proxy server there are also a few added benefits which may not be immediately obvious.

Forward proxies are also useful for performing NAT.

Upgrading key solutions like MFT can be a disruptive process, and it is not uncommon for Pro2col to come across MFT servers which have not been upgraded for over five years as a direct result of the impact and downtime upgrading would have. If a server is using a proxy, then a new MFT server can be installed next to the out-of-date one and, at switch-over, connected to the proxy as soon as the old server is taken down. External users and connections see no difference in how they are connecting, and downtime appears, from the external connection’s point of view, to be a few seconds. As a result, upgrade disruption is kept to a minimum and maintenance windows can be scheduled more regularly.

Many organisations have a security policy of data not being stored in the DMZ and using a proxy server can enable the MFT server to stay in the secure part of your network without routing internet traffic though the DMZ to the server.

PCI DSS regulations amongst others stipulate that credit card data cannot be stored in the DMZ, even if it is encrypted. Using a proxy plus the reporting features of MFT enables compliance.
