
Which resiliency option is best for your Managed File Transfer solution?

As a Managed File Transfer (MFT) solution becomes more business critical, an organisation needs to consider how to make its system more resilient to failure.

There are different ways of achieving this, depending on your chosen solution, but before you think about adding resiliency there are a few things you will need to consider.

 

High Availability vs Disaster Recovery

Highly Available (HA) Managed File Transfer systems typically rely on two or more nodes, each of which can handle requests at the same time. When a single node fails, the remaining nodes carry on and pick up the extra load from the failed node. MFT systems tend to use either a native clustering technology, such as Windows Server Clustering, or a proprietary heartbeat that keeps nodes aware of each other. When one node fails, only the connections and transfers actively passing through that node are lost; transfers passing through other nodes are unaffected and keep being processed. The goal for an HA system is near-zero downtime in the event of a failure.
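To illustrate the heartbeat idea in general terms (this is a minimal sketch, not any vendor's actual mechanism; the peer address, port and intervals are assumptions), each node could announce itself to its peer over UDP and treat the peer as failed after a few missed messages:

```python
# Minimal heartbeat sketch (illustrative only, not a vendor implementation).
# The peer address, port and intervals are assumed values for the example;
# in practice send_heartbeats() and watch_peer() would run as background threads.
import socket
import time

PEER = ("10.0.0.2", 9100)    # hypothetical address of the partner node
INTERVAL = 2                 # seconds between heartbeats
TIMEOUT = 3 * INTERVAL       # peer considered failed after three missed beats

def send_heartbeats():
    """Announce 'I am alive' to the peer node every INTERVAL seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(b"HEARTBEAT", PEER)
        time.sleep(INTERVAL)

def watch_peer(port=9100):
    """If no heartbeat arrives within TIMEOUT, treat the peer node as down."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    sock.settimeout(TIMEOUT)
    while True:
        try:
            sock.recv(32)     # any datagram counts as a heartbeat
        except socket.timeout:
            print("Peer missed heartbeats - redistribute its load on this node")
```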

Disaster Recovery (DR) systems are designed to provide resilience in the event of a more significant failure of the Managed File Transfer system. DR systems are typically based in a different location from the primary system and only accept connections and transfers when the main system is unavailable. Network routing, storage and database replication, and other infrastructure changes may need to be completed before the DR system can be activated, which can lead to a period of service downtime while this takes place.

Active:Active or Active:Passive

When it comes to designing an HA system, most MFT solutions offer either an Active:Active or an Active:Passive configuration.

In an Active:Active configuration, two or more nodes run at the same time, each accepting connections and sharing resources such as storage and database links.

With Active:Passive HA configurations, all the load passes through a single node; when that node fails, another node in the system detects the failure and starts all the services required to run the system. This means that all active connections are lost when a failure occurs, but service is typically resumed very quickly.
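As a rough sketch of the Active:Passive pattern (the health URL, check interval and service start command are assumptions, not a particular product's failover mechanism), the passive node might poll the active node and bring its own services up if the active node stops responding:

```python
# Sketch of the Active:Passive pattern: the passive node polls the active node
# and starts the MFT services locally if it stops responding. The health URL,
# interval, thresholds and service name are illustrative assumptions.
import subprocess
import time
import urllib.request

ACTIVE_NODE_HEALTH = "https://mft-node1.example.com/health"   # hypothetical
CHECK_INTERVAL = 10             # seconds between checks
FAILURES_BEFORE_FAILOVER = 3    # avoid failing over on a single blip

def active_node_is_healthy() -> bool:
    try:
        with urllib.request.urlopen(ACTIVE_NODE_HEALTH, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def become_active():
    # Start whatever services the MFT product needs on this node;
    # "mft-server" is a placeholder service name.
    subprocess.run(["systemctl", "start", "mft-server"], check=True)

failures = 0
while True:
    failures = 0 if active_node_is_healthy() else failures + 1
    if failures >= FAILURES_BEFORE_FAILOVER:
        become_active()
        break
    time.sleep(CHECK_INTERVAL)
```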

Stretch HA

Some MFT solutions are now able to offer a hybrid HA/DR configuration, where nodes can be sited in different data centres and are able to accept connections independently of each other. They typically write to storage that is either replicated in near real time, or they synchronise data between nodes independently of any infrastructure. This approach is becoming more popular as it reduces the footprint of the overall MFT solution while providing the benefits of both HA and DR architectures. The downsides are that not every MFT solution supports this architecture and that the network infrastructure required is more complicated.


Considerations

Before deciding on a resilient architecture you need to consider some or all of the following factors.

1. Infrastructure
HA and DR solutions require more infrastructure and are significantly more complicated to configure and manage. Additional servers, management interfaces and network resources all need to be factored in and available.

2. Networking
Global and local load balancers are needed to route traffic to the correct nodes. These are usually configured either in a “round robin” arrangement, where connections are passed to each node in turn, or, with more advanced load balancers, connections are tracked and passed to the least busy node. In addition, network routes and firewalls need to be configured for all nodes, including passive or DR nodes. Specifically for DR, trading partners and end users may need to be able to send to alternative nodes.
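The difference between the two routing strategies can be sketched in a few lines of Python (node names and connection counts are made up for illustration; in practice this logic lives in the load balancer itself):

```python
# Illustration of the two routing strategies described above.
# Node names and connection counts are invented for the example.
from itertools import cycle

nodes = ["mft-node1", "mft-node2", "mft-node3"]

# "Round robin": hand each new connection to the next node in turn.
round_robin = cycle(nodes)
def next_node_round_robin() -> str:
    return next(round_robin)

# "Least busy": track open connections and pick the node with the fewest.
open_connections = {"mft-node1": 12, "mft-node2": 4, "mft-node3": 9}
def next_node_least_busy() -> str:
    return min(open_connections, key=open_connections.get)

print(next_node_round_robin())   # mft-node1, then mft-node2, ...
print(next_node_least_busy())    # mft-node2 (fewest open connections)
```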

3. Storage Replication
Typically, HA solutions share storage, but DR solutions may need storage replication between data centres. If this only happens on a schedule, then data may not be available when the DR system comes up, or in some cases the system may not be able to come up until the data has been replicated.
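Before activating a DR node, it can be worth confirming that scheduled replication has actually delivered everything expected. A rough sketch, assuming a file manifest is exported from the primary site on each replication run (the paths and manifest format are assumptions), might look like this:

```python
# Rough check that replicated storage looks complete before a DR node is activated.
# Assumes the primary site exports a manifest of file paths and sizes on each
# replication run; the paths and manifest format are assumptions for the example.
import json
from pathlib import Path

MANIFEST = Path("/replicated/manifest.json")    # {"relative/path": size_in_bytes, ...}
DR_STORAGE = Path("/replicated/mft-data")

expected = json.loads(MANIFEST.read_text())
missing = [name for name, size in expected.items()
           if not (DR_STORAGE / name).exists()
           or (DR_STORAGE / name).stat().st_size != size]

if missing:
    print(f"{len(missing)} files not yet replicated - delay DR activation")
else:
    print("Replicated storage matches the last manifest from the primary site")
```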

4. Database Clustering
In a similar way to data replication, databases for HA systems typically run on replicated technology. If the same database is to be used between the data centres, synchronous database clustering would need to be used (rather than asynchronous approaches like mirroring or log shipping). Replication should be near instantaneous to avoid issues similar to those seen with storage replication.
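As one way to keep an eye on this, if the MFT database happened to run on PostgreSQL (version 10 or later) with streaming replication, the built-in pg_stat_replication view exposes the replay lag; the connection details and one-second threshold below are assumptions for the example:

```python
# Check standby replication lag, assuming the MFT database runs on PostgreSQL 10+
# with streaming replication; connection details and the one-second threshold
# are illustrative, and the monitoring user would need suitable permissions.
import psycopg2

conn = psycopg2.connect(host="mft-db-primary", dbname="mft", user="monitor")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT application_name, state,
               EXTRACT(EPOCH FROM replay_lag) AS lag_seconds
        FROM pg_stat_replication
    """)
    for name, state, lag in cur.fetchall():
        if state != "streaming" or (lag or 0) > 1.0:
            print(f"WARNING: standby {name} is {state}, {lag}s behind")
        else:
            print(f"standby {name} OK ({lag}s behind)")
```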

5. Proxies
Forward and reverse proxies can be very helpful in shielding incoming connections from node failures, as they typically “front” the systems and should remain unaffected by any node failure. These gateways can be a big advantage, but in a DR configuration care needs to be taken that they are failed over too in the event of a server failure.

6. Monitoring
Monitoring tools should be in place checking a variety of factors, such as services running and node statuses. If these detect a failure, alerts should be triggered and, where possible, automated failover scripts executed to bring up any additional nodes or resources.
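A minimal monitoring loop might look something like the sketch below; the health URLs, SMTP details and failover script path are all assumptions, and a production deployment would normally use a dedicated monitoring platform rather than a hand-rolled script:

```python
# Minimal monitoring-and-failover loop. The health URLs, SMTP details and the
# failover script path are assumptions; a real deployment would normally use a
# dedicated monitoring platform rather than a hand-rolled script like this.
import smtplib
import subprocess
import time
import urllib.request
from email.message import EmailMessage

NODES = {
    "mft-node1": "https://mft-node1.example.com/health",
    "mft-node2": "https://mft-node2.example.com/health",
}
FAILOVER_SCRIPT = "/opt/mft/bin/failover.sh"   # hypothetical path

def node_up(url: str) -> bool:
    try:
        return urllib.request.urlopen(url, timeout=5).status == 200
    except OSError:
        return False

def alert(text: str):
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = "MFT alert", "mft@example.com", "ops@example.com"
    msg.set_content(text)
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)

while True:
    for name, url in NODES.items():
        if not node_up(url):
            alert(f"{name} is not responding")
            subprocess.run([FAILOVER_SCRIPT, name], check=False)
    time.sleep(60)
```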

Using virtualisation tools to achieve DR

To simplify the whole HA/DR architecture, many organisations now leverage virtualisation tools such as VMware Site Recovery Manager (SRM) or vSphere vMotion. These can handle moving the virtual server instance to another virtual environment along with all associated resources.

If servers are physical, some solutions offer DR tools that can automatically trigger remote instances of the services using simple monitoring.

 

In mission-critical B2B workflows, achieving maximum uptime is of paramount importance. The type of resiliency your business requires may even determine your choice of solution; however, this may also be driven by infrastructure limitations or simply by the way the vendor charges for the additional nodes required to deliver it.

Resources Available For You

The Expert Guide to Managed File Transfer

Includes definitions, requirements assessment, product comparison, building your business case and much more.

Managed File Transfer Needs Analysis

200 essential questions to consider before implementing your chosen Managed File Transfer solution.

Managed File Transfer Comparison Guide

A full feature comparison of Managed File Transfer solutions from the eight leading vendors in the industry.

Why Do File Transfer Systems Need a Health Check?

A Managed File Transfer solution, like many other IT systems, requires regular maintenance to keep it performing at optimum efficiency. Luckily, most back-end housekeeping tasks can be automated, but the way the tool is configured may mean it is not reaching optimal performance.

A health check can uncover bottlenecks, security issues, performance gaps and any inconsistencies with best practice.

Configuration

Many MFT vendors pride themselves on the fact that their solution can be installed by any IT professional. However, this is only half the story. There can be hundreds of security settings inside the applications, and knowing the implications of turning a particular feature on or off is not always obvious. Many systems ship with a set of default settings, which represent the vendor’s view of compliance. These may not align with an organisation’s security policies, local legislation or particular client SLAs.

For example, checking the environment an MFT solution is running on may lead to questions such as: “Is the server being backed up?” or “Is the server running under the correct account credentials?” You might be surprised at the number of systems that are not.

Performance

In extreme cases, MFT solutions in larger organisations may handle many thousands of transfers every hour; as a result, ensuring the system is as lean as it can be is vital to meeting increased demand. Many of the systems we look at have old, out-of-date or even test processes configured which, while they may not move data, clutter up the system or constantly fail, skewing KPI reports.

In addition, removing unused or expired accounts and data can help free up capacity on the server. Workflows may be created when a system is installed, then altered over time or duplicated; it may be possible to rebuild them to reduce complexity, making them easier to troubleshoot. Other settings that frequently need changing are security protocols, such as TLS versions, or authentication sources. At initial implementation the system may not have been connected to the organisation’s Active Directory, for example, but now that the number of users has grown, doing so would improve security.
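Most MFT products can export their account list in some form; assuming a CSV export with username and last-login columns (the file name, column names, date format and 180-day cut-off are assumptions), a quick sweep for stale accounts could look like this:

```python
# Sweep a CSV export of MFT accounts for stale entries. The file name, column
# names, date format and 180-day cut-off are assumptions - adjust them to
# whatever your MFT product actually exports.
import csv
from datetime import datetime, timedelta

CUTOFF = datetime.now() - timedelta(days=180)   # roughly six months of inactivity

with open("mft_accounts.csv", newline="") as f:
    for row in csv.DictReader(f):
        last_login = row.get("last_login")
        if not last_login or datetime.strptime(last_login, "%Y-%m-%d") < CUTOFF:
            print(f"Review/disable: {row['username']} (last login: {last_login or 'never'})")
```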

Software version

MFT systems are typically much more reliable than legacy file transfer systems built into operating systems. File transfers only tend to fail when the remote server has an issue or someone changes the network. Combine this with the “if it’s not broken, don’t fix it” school of thought, and systems are often not reviewed or kept up to date with the latest versions and patches. As vendors develop their solutions, new features are introduced and issues addressed; these can sometimes be leveraged to significantly improve the performance of the system or to add functionality currently offloaded elsewhere. A vendor maintenance agreement is typically only valid if you are running a supported version of the software, which can leave an organisation exposed if it has a major issue.

Keeping the MFT system software up to date means that the system can benefit from all the latest features and security patches.
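A simple part of a health check is comparing the installed version against the vendor’s minimum supported version; the version strings in the sketch below are placeholders that would come from the product and the vendor’s support matrix:

```python
# Compare the installed MFT version against the vendor's minimum supported
# version. Both version strings are placeholders; in practice they would come
# from the product itself and the vendor's published support matrix.
def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

installed = "6.2.1"            # placeholder: read from the product
minimum_supported = "6.4.0"    # placeholder: from the vendor's support policy

if parse(installed) < parse(minimum_supported):
    print(f"Version {installed} is below the supported minimum {minimum_supported} - plan an upgrade")
else:
    print(f"Version {installed} is within support")
```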

Knowledge transfer

System Managers move on to new roles and are replaced. Often there is some handover covering what the system does and how to perform common operations on it. However, this is not always a complete training course and can be prone to misconceptions being propagated.

At installation time, several features may not have been activated because they were not required for the initial deployment; however, it may transpire that these features address needs that have subsequently appeared. Utilising the features the MFT solution already has can help the system perform better and may allow the organisation to get more return on its investment.

Is it future proof?

As an organisation’s requirements change over time, the initial installation and configuration may no longer fulfil its needs. Preparing your system for either High Availability or Disaster Recovery may not be as straightforward as installing new nodes. A health check at this point may throw up issues, such as database locations or firewall problems, which, if dealt with before any configuration changes, can make the whole process go much more smoothly.

 

A good regular health check ensures that your system is running efficiently and effectively. As Managed File Transfer becomes mission critical, a health check ensures an organisation isn’t taking any unnecessary or unknown risks.
