Managed File Transfer Archives

Password Security in Managed File Transfer

Last week was “World Password Day”, a day designed to get people thinking about password security and, hopefully, changing their passwords. I was surprised to read in an article from Sophos that the average person has 19 passwords to remember and that almost a third of people struggle to create strong ones. With the raft of work systems, private email, social media, online shopping and banking passwords, I thought it would be many more. A quick tally of my own online accounts put me in excess of 30 passwords, although most of my private account passwords are variations on four main ones. I once worked for a very large organisation that insisted passwords were changed every month but suggested you simply add the month digit to the end of your password, negating the security benefit almost entirely.

The full article from Sophos can be found here.

Having strong passwords and authentication methods for file transfer accounts is very important. There are several approaches for user authentication that are supported by most Managed File Transfer (MFT) solutions.

These are:

  • Application controlled
  • External source (AD/LDAP/other)
  • Advanced authentication using RADIUS or a one-time password system
  • Private key authentication

With application controlled authentication, the MFT solution controls the length, complexity, password history and password expiry using its internal systems. Usually, users are prompted to change their passwords either by email or when they log in.

This works well, but for users inside the organisation passwords can drift out of sync, and the burden grows as users are asked to remember more and more passwords to access different systems. In this case, we usually recommend that the MFT solution uses the internal Active Directory or LDAP source. This allows users to log in with the same credentials they use for their computers. Responsibility for changing the password then resides with the AD/LDAP system, and the MFT solution will not normally track passwords at all. When a user presents their credentials to log in to the MFT solution, the system passes the username and password to the AD/LDAP source for verification. If the AD/LDAP system confirms the credentials are correct, the MFT solution lets the user in. As there is usually no caching of credentials, if a user changes their password on the AD/LDAP system the change is reflected instantly in the MFT system.
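For illustration, here is a minimal sketch of that pass-through check, assuming the Python ldap3 library; the server address and UPN suffix are placeholders, and real MFT products wrap this logic in their own authentication modules.

```python
# Minimal sketch of LDAP pass-through authentication (assumed placeholders).
from ldap3 import Server, Connection, ALL
from ldap3.core.exceptions import LDAPException

def verify_credentials(username: str, password: str) -> bool:
    """Return True if the AD/LDAP server accepts a simple bind with these credentials."""
    server = Server("ldaps://ad.example.internal", get_info=ALL)
    user_dn = f"{username}@example.internal"   # UPN-style bind, typical for AD
    try:
        conn = Connection(server, user=user_dn, password=password)
        ok = conn.bind()      # the bind attempt is the verification - nothing is cached
        conn.unbind()
        return ok
    except LDAPException:
        return False
```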

To increase the security of AD/LDAP authentication, RADIUS solutions using time-limited one-time password tokens, or even SMS messages, can be integrated to provide an extra level of security.

In RADIUS authentication, the user or device sends a request to the MFT system to gain access to a particular network resource, then the system passes a RADIUS Access Request message to the RADIUS server, requesting authorization to grant access via the RADIUS protocol. RADIUS servers vary, but most can look up client information in text files, LDAP servers, or databases. The RADIUS server can respond with an Access Reject, Access Challenge, or Access Accept. If the RADIUS server responds with an Access Challenge, additional information is requested from the user or device, such as a secondary password. Access Accept and Access Reject allow or reject the user access respectively.
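The exchange described above can be sketched with the pyrad library; this is an illustration under assumed details (the server address, shared secret and dictionary file are placeholders), not the configuration of any particular MFT product.

```python
# Illustrative Access-Request / Access-Accept exchange using pyrad (placeholders throughout).
from pyrad.client import Client
from pyrad.dictionary import Dictionary
from pyrad import packet

client = Client(server="radius.example.internal",
                secret=b"shared-secret",
                dict=Dictionary("dictionary"))   # standard RADIUS attribute dictionary file

req = client.CreateAuthPacket(code=packet.AccessRequest, User_Name="alice")
req["User-Password"] = req.PwCrypt("one-time-password-123456")

reply = client.SendPacket(req)
if reply.code == packet.AccessAccept:
    print("Access-Accept: let the user in")
elif reply.code == packet.AccessChallenge:
    print("Access-Challenge: ask the user for additional information")
else:
    print("Access-Reject: deny access")
```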

Using AD/LDAP authentication or RADIUS authentication works well for users who log into the system interactively via a web interface or a file transfer client such as FileZilla, but it does not work well for accounts used as part of file transfer scripts.

The most popular method of securing these is to use “private key” or “key pair” authentication. With this, the account does not use a defined password. Instead, the MFT solution encrypts a token with the account’s public key and sends it to the client as a challenge; the client decrypts the token with the private half of the key pair and returns it, proving possession of the private key. If the returned token matches, the MFT solution accepts the user as verified and allows the account access. In this way, any scripts that need to access the MFT solution do not need passwords embedded in them in plain text. Key pair authentication works with SSH keys for SFTP, and with SSL/TLS certificates for FTPS and HTTPS connections.
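As a hedged example, a transfer script might authenticate with an SSH key pair instead of an embedded password, for instance using paramiko; the hostname, account and paths below are placeholders.

```python
# Minimal sketch of key-based SFTP from a script (no password stored anywhere).
import paramiko

key = paramiko.Ed25519Key.from_private_key_file("/home/batch/.ssh/id_ed25519")

ssh = paramiko.SSHClient()
ssh.load_system_host_keys()            # verify the MFT server's host key
ssh.connect("mft.example.com", username="batch-account", pkey=key)

sftp = ssh.open_sftp()
sftp.put("/data/outbound/report.csv", "/inbound/report.csv")
sftp.close()
ssh.close()
```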

With many password breaches now coming not from brute-force attacks but from compromised authentication databases, experts are advocating Two Factor Authentication (2FA) rather than ever longer or more complex passwords. This can be achieved in your MFT solution using a combination of password and private key authentication, or RADIUS, and it works well for both users and scripts.

Now may be a good time to review your MFT password policies, and perhaps time I changed some of my passwords too!

Resources Available For You

The Expert Guide to Managed File Transfer

Includes definitions, requirements assessment, product comparison, building your business case and much more.

Managed File Transfer Needs Analysis

200 essential questions to consider before implementing your chosen Managed File Transfer solution.

Managed File Transfer Comparison Guide

A full feature comparison of Managed File Transfer solutions from the eight leading vendors in the industry.

File Transfer Enables Formula 1 Teams To Go Even Faster

The world of Formula 1 (F1) racing is one of high speeds, high stakes and leading edge technology. Drivers like Lewis Hamilton or Sebastian Vettel are just the public face of a much larger team working together to build a car that goes faster than their competitors’. With the stakes so high, racing teams are always looking for ways to gain a competitive edge.

And so data plays a key part in F1. Each car has hundreds of sensors monitoring every aspect of the car, generating data on fuel load, tyre temperature, brake performance and much, much more. During a race this data is typically streamed back to race HQ enabling in-race, real-time decisions to be made about race strategy.

Managed File Transfer in Formula 1

However, the sensors also create a large amount of data during pre-season testing, in simulation environments, during free practice before each race and during qualification, all of which needs analysing to gain the split second advantages that make the difference between first and second place.

The data produced is valuable – very valuable. The team’s performance has a clear, measurable impact on the prize money received, although the effect on the brand is not quite so easy to quantify. The prize money alone is astronomical: last season nearly $965m was split between the top 10 teams. Ferrari topped the prize money table with $192m, even though they finished second in the constructors’ championship, thanks to a special agreement with F1.

F1 2016 Prize Money

Typically, racing teams spend much of the year in countries around the world where Internet connectivity isn’t always brilliant. Coupled with the distance back to their HQ and factories, any data transfers suffer from long round-trip times (RTT), high packet loss and latency on the connection. This combination of factors results in poor data throughput, which is where Managed File Transfer comes in.

Pro2col was approached by a Formula 1 team to help them solve the problems they were having with file transfers. They needed a system that made it easy for teams travelling the globe to send large data sets from their laptops back to HQ in the UK. It needed to cope with the challenging network environments they would encounter before and during the race season, and of course security was of paramount importance.

 

Conducting A Thorough Needs Analysis

Pro2col conducted a thorough needs analysis with the Formula 1 team to understand their exact requirements, which allowed us to then review the available technologies in the marketplace, scoring them against the defined requirements.

Whilst a variety of products would have ‘done the job’, the devil was in the detail: not whether the software could do it, but how. Two technologies were eventually recommended for evaluation. Pro2col’s technical team helped set up the software and establish the success criteria for the proof of concept, and both technologies performed as expected.

Working with the Formula 1 team, we established that one of the technologies had a much more competitive per-user pricing structure, with a clear growth path as the team looked to expand it into other areas of the business – for example, sharing data with customers and suppliers.

Now seven years since the original implementation of the software, the F1 team continues to perform well on the track and grow the footprint of its Managed File Transfer solution, adding further licences when the need arises.


Disaster Recovery in Managed File Transfer

There is an increasing reliance on high availability to protect Managed File Transfer (MFT) systems, and indeed most MFT vendors provide a robust solution, often offering both Active-Active and Active-Passive configurations. There are, however, many circumstances where high availability is simply not an option, whether due to cost, infrastructure or some other reason. In that case it is necessary to revert to the ‘old school’ way of doing things, using some form of backup-restore mechanism to provide disaster recovery.

To be clear for anyone whose understanding of high availability versus disaster recovery differs from mine, this article is based on the following definitions:

In high availability there is little or no disruption of service; after part of the infrastructure fails or is removed, the rest of the infrastructure continues as before.

In disaster recovery, the service is recovered to a new, cold or standby instance, either by automatic or manual actions.  This includes VM snapshots and restoring to a non-production environment.

Planning Ahead

It goes without saying that disaster recovery isn’t something you can achieve on the fly; it’s important to have detailed plans and to rehearse them regularly until the recovery completes flawlessly every time. When you start planning for disaster recovery, the very first question should be: “What should my recovery environment look like?”

This might sound like a strange way to start, but just take a moment to consider why you have a Managed File Transfer system in the first place.  Do you need the data stored in it to be available following the recovery or just the folder structure?  It’s a best practice policy not to leave data in the MFT system for long periods of time – it should contain transient data, with an authoritative source secured elsewhere.  If you continue with this train of thought, think about how valid the content of any backup would be if (for example) it is only taken once per day.  Potentially that could mean 23 hours and 59 minutes since the previous backup; a lot can change in that time.

Similarly, consider that you may have another system sending data into MFT on a frequent basis; if that system also needs to be recovered (due perhaps to a site outage), then you will need to find a common point in time to recover to, or risk duplicate files being sent following recovery activities (see RPO below).

Should your recovery environment be sized the same as the production environment?  Ideally, the answer is always going to be yes, but what if your production system is under-used, or sized to accommodate periodic peaks?  In that case, a smaller, less powerful environment may suffice.

RTO and RPO

Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are the two most critical points of any disaster recovery plan.  RTO is the length of time that it will take to recover your MFT system; RPO is the point in time that you will set your MFT system back to – frequently this is the last successful backup.  As already mentioned, you may need to synchronise the RPO with other independent systems.  Once you have decided upon the RPO, you need to plan how you will handle transfers which may have occurred since that time; will they be resent, or do you need to determine which files must not be resent?  Will you need to request inbound files to be sent again?

You can decide on RTO time only by executing a recovery test.  This will enable you to accurately gauge the amount of time the restore process takes; remember that some activities may be executed in parallel, assuming available resources.

Hosting MFT systems on Virtual Machine (VM) farms has changed the way we think about RPO and RTO somewhat.  In general, virtualisation platforms such as VMware or Hyper-V give us several possibilities for recovery, including:

  • A system snapshot taken periodically and shipped to the recovery site
  • Replication of the volume containing the VM (and generally several other VMs)
  • A Hyper-V shared cluster environment

Of these, the Hyper-V shared cluster probably comes closest to being a high availability alternative; however, it should be remembered that it uses asynchronous replication of the VM, which means some loss of data or transactions is possible, albeit a small one.

The Recovery

Let’s assume that you’ve decided to avoid the RPO question by presenting an ‘empty’ system in your recovery site.  This means that you will need to periodically export your production configuration and ship it to the recovery site.  Ideally, you would want to do this at least daily, but possibly more frequently if you have a lot of changes.  Some MFT systems allow you to export and ship the configuration using the MFT product itself – this is a neat, self-contained method that should be used if it’s available.  In this way you are more likely to have the very latest copy of the configuration from before the server became unavailable.
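As a rough sketch, an “export and ship” job might look something like the following. The mftadmin export-config command is an invented placeholder, since every vendor exposes its own CLI or API for configuration export, and the recovery hostname and paths are assumptions.

```python
# Hypothetical nightly "export and ship" job (vendor CLI and hostnames are placeholders).
import subprocess
from datetime import date

import paramiko

export_path = f"/var/backups/mft-config-{date.today():%Y%m%d}.zip"

# 1. Export the production configuration (vendor-specific step - invented command name).
subprocess.run(["mftadmin", "export-config", "--output", export_path], check=True)

# 2. Ship it to the recovery site over SFTP using key authentication.
key = paramiko.Ed25519Key.from_private_key_file("/home/mft/.ssh/id_ed25519")
ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.connect("dr-mft.example.internal", username="dr-transfer", pkey=key)
ssh.open_sftp().put(export_path, f"/dr/config/{export_path.rsplit('/', 1)[-1]}")
ssh.close()
```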

The actual MFT software may or may not be installed in advance, depending upon your licence agreement (some vendors permit this, others not – be sure to check as part of your planning).  In any event, it is best to keep the product installation executable(s) on the server in case they are required.

So, next on the list of things to think about: what else do you need to complete the recovery? Unfortunately, the answer can be quite long (a small pre-flight check sketch follows the list):

  • DNS
    Can the new server have the same IP address as the old?  Do you need to add it?  The new server may well be on a completely different subnet.
    If you are using DNS CNAME records to reach the server, where are they updated?
    Is there a load balancer to be updated?
    Does the recovery server have the same firewall rules as the production server?
    Are you using a forward proxy to send traffic out of the network, and if so will it present the same source IP address?
    If you have multiple sites defined in your MFT system, does each have a unique IP address?
  • Keys and Certificates
    Are these included as part of the system configuration or do they have to be handled separately?
    Are PGP keyrings held in the home directory of the account that the MFT system runs under?
  • User Accounts
    Does your configuration export include locally defined users?  Do you make use of local groups on the server which may not be present on the recovery server?
    Will LDAP/LDAPS queries work equally well from this machine?
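A hypothetical pre-flight script can spot-check a few of these items automatically after a recovery; everything in it (hostnames, IP addresses, ports and file paths) is a placeholder for your own environment.

```python
# Hedged pre-flight sketch: spot-check DNS, connectivity and certificates after recovery.
import socket
from pathlib import Path

def check_dns(name: str, expected_ip: str) -> bool:
    try:
        return socket.gethostbyname(name) == expected_ip
    except OSError:
        return False

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_certificate(path: str) -> bool:
    return Path(path).is_file()          # present on disk; expiry checks left to the reader

checks = {
    "DNS points at recovery server": check_dns("mft.example.com", "10.20.30.40"),
    "SFTP port reachable":           check_port("mft.example.com", 22),
    "LDAPS reachable":               check_port("ad.example.internal", 636),
    "TLS certificate in place":      check_certificate("/etc/mft/certs/server.pem"),
}

for name, passed in checks.items():
    print(f"{'OK  ' if passed else 'FAIL'} {name}")
```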

Returning to Normal Operations

Sooner or later you will need to switch operations back to the normal production environment.  Unfortunately, this isn’t always as straightforward as you might wish.

When disaster struck and you initiated your disaster recovery plan, you were forced into it by circumstances.  It was safe to assume that some data was lost, and the important thing was to get the system back.  Now, however, your recovery environment may have been running for days and will probably have handled a number of transfers.  At this point you need to ‘drain’ the system of active transfers and identify any files that have been uploaded into your MFT solution but not yet downloaded.

Some MFT systems keep track of which files have been transferred (to avoid double sending); if your MFT system is one of these, then you will need to ensure that the production system knows which files the recovery system has already handled.  Regardless of this, you will need to ship your configuration back to the production server in order to accommodate any changes that have occurred – for example, users being created or deleted, or even simply changing their password.
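As a simple illustration, reconciling the recovery system’s “already sent” ledger back into production might look like the sketch below; the one-filename-per-line ledger format and the paths are assumptions, as real products record this state in their own databases.

```python
# Illustrative merge of the recovery system's "already sent" list into production,
# so files handled during the outage are not resent (file format is an assumption).
from pathlib import Path

prod_ledger = Path("/var/mft/state/sent-files.txt")
dr_ledger = Path("/mnt/dr-export/sent-files.txt")

already_sent = set(prod_ledger.read_text().splitlines())
handled_in_dr = set(dr_ledger.read_text().splitlines())

new_entries = sorted(handled_in_dr - already_sent)
with prod_ledger.open("a") as ledger:
    for filename in new_entries:
        ledger.write(filename + "\n")

print(f"Added {len(new_entries)} entries handled by the recovery system")
```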

Synchronise the return with other applications that send data through the MFT system, to avoid bottlenecks forming during the move; remember that any DNS changes you make at this point may take some time to replicate through the network.

Keeping the Recovery System Up To Date

Of course, any time you make an update to your production environment, it could easily invalidate your recovery environment.  An example might be something as simple as resizing a disk or adding a new IP address – both of these activities should be covered by change management practices, but we all know that replication of changes into the recovery environment doesn’t always happen.  This is why it’s so important to perform disaster recovery exercises regularly, every six months or so, so that you can identify and resolve these discrepancies before a disaster occurs.  When deciding which changes need to be replicated, look again at the areas considered when first setting up the recovery environment.

Ramifications of Not Being Disaster Ready

For many organisations, the MFT system is far more important than people realise.  An unplanned outage can prevent goods being shipped, orders being placed and payments being sent, which has a negative impact not only at a financial level but also at other, intangible levels that aren’t so easily quantified.  How likely are customers to use your services in the future if they can’t rely on their availability now?

There’s also the certification aspect to consider.  If you are pursuing ISO 27001 certification, you need a realistic plan in place, and you must test and maintain it – neglecting this will result in an audit failure, and potentially the loss of certification if it has already been awarded.

Finally, the most important thing is to document EVERYTHING.  Every step should be written so it can be followed by someone without specific knowledge of the system.  Every change should be recorded and every test detailed, regardless of success or failure.


Metadata in the Managed File Transfer Space

One of the limitations of using any file transfer protocol is describing the file that is being transferred.  In early iterations of many (though not all) solutions, this was not even a consideration – if you needed to add some information, you included a header, or possibly just another file to describe the first.  This was (and still is) very cumbersome, requiring a file to be opened just to determine its content.

Zenodotus

Someone who was way ahead of the game on this issue was the ancient scholar and literary critic Zenodotus, who, in around 280 BC, became the first librarian of the Library of Alexandria.  Zenodotus organised the library by subject matter and author but, more importantly for this blog, attached a small tag to each scroll describing its content, title, subject and author.  This approach meant that scholars no longer had to unroll scrolls to see what they contained, and it is the first recorded use of metadata.

In IT terms, metadata came into play in the 1970s as an alternative method of locating data when designing databases, but it really became established as an integral part of data manipulation when XML became popular for web services.

Metadata in MFT

In terms of Managed File Transfer (MFT), if we consider a file being transferred as analogous to a scroll, we might use the metadata ‘tag’ to record things about the file – the person sending it, its content, its final recipient and perhaps a checksum hash.  The possibilities for use are endless and we very quickly get to a point of wondering how we ever got by without it.

But before you start googling how to add metadata to a traditional transfer, you should be aware that the only metadata you are likely to be able to access over FTP or SFTP is the filename and creation date (occasionally permissions or ownership too, depending upon the system).  Obviously, this isn’t very useful for describing the data – what’s required is a little help from the file transfer vendors.  This is normally delivered to end users via a webform – an HTML-based form containing several input fields completed at upload time – or via some form of API.  The metadata is then stored either in XML files or, more commonly, a database, from where it can be related to the files and queried as required.

What do I do with the Metadata?

Generating a webform for uploading metadata in a Managed File Transfer system is actually quite simple – the challenge comes later when trying to maintain the relationship to the file; for example, will your automation engine be able to (a) read the metadata, and (b) act upon it to determine what to do with the file?  It is quite straightforward to plan, but unfortunately not so simple to implement.
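To make the idea concrete, here is a minimal sketch of an automation step that reads metadata captured at upload time and routes the file accordingly; the JSON sidecar layout and routing table are illustrative assumptions, since, as noted above, products usually hold the metadata in XML files or a database.

```python
# Minimal sketch: read upload metadata (a) and act on it (b) to route the file.
import json
import shutil
from pathlib import Path

ROUTES = {"invoice": "/data/finance/inbound", "payroll": "/data/hr/inbound"}

def route_upload(upload: Path) -> None:
    sidecar = upload.parent / (upload.name + ".meta.json")   # assumed sidecar layout
    meta = json.loads(sidecar.read_text())                   # (a) read the metadata
    destination = ROUTES.get(meta.get("document_type"), "/data/quarantine")
    shutil.move(str(upload), str(Path(destination) / upload.name))   # (b) act on it

# route_upload(Path("/data/uploads/march-invoices.csv"))
```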

Some vendors have a slightly more advanced workflow methodology than others – if webforms and metadata are necessary for your environment, then it may be worthwhile looking at out-of-the-box solutions, rather than coding your own.  The challenges around building, securing and maintaining a webform and workflow combination frequently outweigh the costs of such a system.  Without doubt however, all the major MFT vendors provide some form of webform integration to one extent or another.  Metadata is here to stay in the world of MFT, but at the time of writing this there is no industry standard, clear winner or even preference in direction.


What is Managed File Transfer?

 

So, what is Managed File Transfer and why is it important?

In the digital economy, more than a third of all business-critical processes involve file transfers. Managed File Transfer is a solution that enables the secure transfer of data between two or more locations connected via a network. It is typically delivered as on-site software but can also be offered as a cloud service.

Managed File Transfer offers a comprehensive set of features aimed at replacing insecure legacy FTP servers, home-grown bespoke file transfer solutions, physical shipment of media (e.g. USB devices, DVDs and HDDs), consumer-grade cloud-based services, large email attachments and expensive point-to-point leased lines and VANs.

Managed File Transfer can be a powerful business enabler that reduces costs and risk, improves efficiency and agility, and opens the door to new mobile, cloud and big data initiatives.


A recent study by Aberdeen Group* showed that 65% of businesses that implemented Managed File Transfer did so to improve productivity. Managed File Transfer solutions typically have a range of features that enhance productivity whilst also improving security. For example, they:

  • Centralise support for multiple file transfer protocols, including FTP/S, OFTP, SFTP, SCP, AS2 and HTTP/S.
  • Encrypt files throughout the file transfer process (in transit, at rest and even through automation), managed centrally via a simple-to-use interface.
  • Automate file transfer processes with trading partners, from payroll to planning.
  • Detect and handle failed file transfers, including notification and initiating remedial action.
  • Authenticate users against existing user repositories such as LDAP and Active Directory.
  • Integrate with existing applications using documented APIs (application programming interfaces) to reduce the costs and risks of manual intervention.
  • Generate detailed reports on user and file transfer activity to highlight any areas requiring further improvement or training.
  • Provide monitoring and dashboards for a real-time view of your business-critical workflows.

Gartner* highlights that the two key differences between standard FTP servers and Managed File Transfer solutions are the ability to “manage” and the ability to “monitor”. Gartner defines these as:

  • Manage – means to manage all file transfers using one interface (one place) across all business units, operations, systems, applications, partners, etc.
  • Monitor – means to monitor all file transfers in one centralised location which in turn means better governance, compliance and reduced IT costs.

Managed File Transfer projects can range in size, scale and scope, from addressing tactical problems to delivering a strategic solution that manages all of a company’s data transfer requirements.

If you think your organisation could benefit from a Managed File Transfer solution, we have a range of resources to help get your project started. Why not download our Managed File Transfer comparison guide or look at our needs assessment options.

 

References*
From Chaos to Control, Aberdeen Group study, November 2013
Managed File Transfer Offers Solutions for Governance Needs, Gartner 2010

 


Protecting Your Data At Rest – What Are Your Options?

Modern Managed File Transfer (MFT) solutions provide several ways to protect data. In addition to using secure protocols for data in transit and providing protection against DDoS, hammering and brute-force attacks, many solutions provide mechanisms for securing files at rest while they await collection or processing.

Protecting data at rest

Protecting the files at rest can be achieved in several ways, with the most common being:

  • Writing to an Encrypted File store
  • Encrypting Data using PGP or similar
  • Securing them in another network segment

Encrypted File store

Encrypted file stores either leverage native encryption technology such as EFS or use their own encryption methods to secure files stored in the data area of the MFT solution. Files are encrypted before they are stored, so there is no requirement for administrators to manage keys.  Decryption is also done on the fly, when a file is downloaded through the software. Browsing to the storage location from the operating system may show either the real file names or an anonymised series of files. The downside to this method is that data written to a Windows share is not accessible to other applications except via the solution itself, i.e. via an API.
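To illustrate the principle (not any vendor’s actual implementation), the encrypt-on-write / decrypt-on-read pattern can be sketched with the cryptography library’s Fernet recipe:

```python
# Simplified sketch of an encrypted file store: only ciphertext ever hits disk.
from cryptography.fernet import Fernet

store_key = Fernet.generate_key()       # in practice, held internally by the MFT solution
store = Fernet(store_key)

def write_to_store(plaintext: bytes, path: str) -> None:
    with open(path, "wb") as f:
        f.write(store.encrypt(plaintext))      # encrypt before the file is stored

def read_from_store(path: str) -> bytes:
    with open(path, "rb") as f:
        return store.decrypt(f.read())         # decrypted on the fly at download time

write_to_store(b"payroll run 2024-03", "0001.bin")
assert read_from_store("0001.bin") == b"payroll run 2024-03"
```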

If your MFT solution does not support encryption at rest natively, then there are several network storage devices which can present encrypted storage as a normal CIFS share. Using this as storage for your MFT solution will protect your data from physical theft but may not protect from access by internal users or systems.  Not all MFT solutions can be integrated with this type of encrypted storage device.

Use PGP

Another popular method is to secure the data using PGP. PGP gives you the option of encrypting a file outside of the MFT solution for full end-to-end security.  Alternatively, most MFT solutions support PGP encryption and decryption of incoming and outgoing files.  PGP encryption applied by the MFT system is triggered once a file has been successfully uploaded; the encrypted file can then be sent on to a remote system, where it is decrypted.  While this process has many positives, not all MFT solutions support PGP encryption on the fly: the solution must wait for the file to be uploaded and stored unencrypted before it attempts to encrypt it. This means there is a short period where the file sits on the storage in an unencrypted state, and only once the encryption has completed successfully is the unencrypted version deleted. As the whole process typically takes only a few seconds, the exposure is minimal and many organisations accept the risk of temporarily unencrypted data.
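A rough sketch of that post-upload step, using the python-gnupg wrapper, might look like this; the recipient key, GnuPG home directory and file paths are placeholders.

```python
# Hedged sketch of PGP-encrypting a file after upload, then removing the plaintext.
import os
import gnupg

gpg = gnupg.GPG(gnupghome="/home/mft/.gnupg")
uploaded = "/data/uploads/orders.csv"

with open(uploaded, "rb") as f:
    result = gpg.encrypt_file(f, recipients=["partner@example.com"],
                              output=uploaded + ".pgp")

if result.ok:
    os.remove(uploaded)        # delete the briefly unencrypted original
else:
    raise RuntimeError(f"PGP encryption failed: {result.status}")
```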

Network Segments

An alternative approach to protecting your data at rest is to use the forward/reverse proxy capabilities of MFT solutions. This adds an extra layer of defence to your MFT system’s security. As no data is stored on these proxies, any external attack that compromised the proxy server would not be able to access any data, because it is safely stored on the main MFT server behind another firewall. Just like encrypted file stores, these gateways are completely transparent to end users.

Each of these measures helps protect data at rest, and they can be combined to give a high level of protection. They can also assist in meeting regulatory requirements such as PCI DSS, ISO 27001 and others.

GDPR

ONLY 15 MONTHS TO GO!! – Are you ready?