Supply chain disaster: Do you need an MFT dev environment?

Why you need an MFT dev environment

In all the years we’ve been working in file transfer, there have been a few occasions when we’ve witnessed the financial impact and reputational damage a system failure can cause. This article looks at:

  • Why you should think twice before testing in a live environment;
  • When you need to consider a development (dev) environment for your Managed File Transfer (MFT) solution;
  • Details of the six stages for testing and development.

“A few years ago, one organisation was developing workflows in a live environment and broke other automated processes. The system was down for just a few hours, but the impact was huge. The business supplied products to retailers across the country but was unable to access its order information. The lorries couldn’t leave the factory and delivery drivers had to be paid overtime. Worse still, the retailers were left out of stock, consumers bought other brands and some stayed with them. The impact on the business’ finances and reputation was catastrophic.”

Richard Auger, Pro2col technical consultant

This particular example could have been prevented if the IT team had been developing in a test environment instead of the live one. Yet many organisations only hold a live MFT production licence, either to save money or because decision makers simply don’t think a file transfer server needs a test licence. But an MFT system does far more than transfer files: if you have any automated workflows involved, you need to reconsider.

Is a dev environment business critical?

This will depend on the value of the data your system is handling. Is it critical to business processes? Do you risk breaching service level agreements (SLAs)? Or will you simply be unable to operate, as in the example above? While you may be able to send files by some other method for a few hours, that isn’t viable for a sustained period.

You also need a change control policy to meet ISO 27001 requirements. While it is down to you to determine the right policy for your unique set of circumstances, ISO best practice advocates testing in an isolated, controlled and representative environment. Similarly, ITIL requires an organisation to follow both ‘change management’ and ‘release and deployment management’ processes when moving from non-production to production systems. It’s an old IT joke that in weaker, less secure environments TIP doesn’t mean ‘Transfer into Production’ – it ends up being ‘Test in Production’ instead.

So, to avoid disrupting your system when deploying new releases, building workflows or making other changes, you should follow these six stages for testing, development and transfer into production:

  1. Sandbox, or experimental environment: This is a local environment no one else can access, where the developer has a working copy of the code. Here they can try it out and change it without putting it live. This environment will typically be an individual developer’s workstation. Once they are happy with it, the developer submits the code to the repository for the next stage of development. Most MFT solutions don’t include a sandbox by default, but you can sometimes create one by installing the software on a private virtual machine.
  2. Development or integration environment: This is a clean environment where you test how your code interacts with the rest of the code associated with the system. The code itself doesn’t get changed here – updates are made to the working copy back in the sandbox and resubmitted. When ready, the developer accepts the code and it moves to the test environment.
  3. Testing: This is the environment to test the new or changed code, either manually or using automated techniques. You may have different test environments to focus on different types of testing. The developer looks at how it interacts with and impacts other systems and tests performance and availability. If you are upgrading, for example, this will show how your system will behave once the upgrade is in place. From here, the code can be promoted to the next deployment environment.
  4. User acceptance testing (UAT) or quality assurance (QA): At this stage users trial the software, making sure it delivers against requirements. Stress testing is also carried out here.
  5. Pre-production, or staging environment: This is the last environment before live, where the release is tested in conjunction with all the other applications in the infrastructure. The aim here is to test all installation, configuration and migration scripts and procedures; load testing also happens here. It’s really important that this environment is completely identical to the production (live) environment – all systems should, for example, be the same version.
  6. Production or live environment: Transfer into production – or TIP – is the final stage, bringing the updates live. This is the environment that users actually interact with. Going live can mean deploying new code over the old, or simply deploying a configuration change (see the sketch below); some organisations choose to deploy in phases, in case of any last-minute problems.
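
To make this concrete, here is a minimal sketch of one way to keep workflow logic identical across environments so that promotion is purely a configuration change. It isn’t tied to any particular MFT product: the config/dev.json file layout, the MFT_ENV variable and the field names are all hypothetical, illustrative choices.

    import json
    import os
    from pathlib import Path

    # Hypothetical layout: one JSON file per environment, e.g. config/dev.json,
    # config/test.json, config/staging.json and config/prod.json, each holding
    # the endpoints and paths for that stage only.
    CONFIG_DIR = Path("config")

    def load_settings() -> dict:
        """Read the environment name from MFT_ENV (an assumed convention),
        defaulting to the sandbox so nothing accidentally targets production."""
        env = os.environ.get("MFT_ENV", "dev")
        allowed = {"dev", "test", "staging", "prod"}
        if env not in allowed:
            raise ValueError(f"Unknown environment {env!r}; expected one of {sorted(allowed)}")
        with open(CONFIG_DIR / f"{env}.json") as f:
            return json.load(f)

    def run_order_workflow(settings: dict) -> None:
        # The workflow never hard-codes a host or path, so the code tested in
        # stages 2-5 is exactly what runs in production.
        print(f"Pulling orders from {settings['sftp_host']}:{settings['inbound_dir']}")
        print(f"Archiving processed files to {settings['archive_dir']}")

    if __name__ == "__main__":
        run_order_workflow(load_settings())

Because the logic is identical in every environment, what you sign off in UAT and staging is genuinely what goes live.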

If you follow these steps, you can be confident that any upgrades to the production environment will be completed reliably and efficiently. But if your budget or internal policy won’t allow you to invest in all of these environments, we would recommend at least a test environment, which should be an exact copy of the production environment.
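
If you do pare things back to a single test environment, it’s worth automating the ‘exact copy’ check. Below is a minimal sketch, assuming each environment can report its component versions as a simple name-to-version mapping; how you collect those versions in practice depends entirely on your MFT product and platform, and the example data is hypothetical.

    def compare_environments(test: dict, prod: dict) -> list:
        """Return human-readable mismatches between two environments, where
        each argument maps a component name to its version string."""
        problems = []
        for component in sorted(set(test) | set(prod)):
            t, p = test.get(component), prod.get(component)
            if t != p:
                problems.append(f"{component}: test={t!r} vs prod={p!r}")
        return problems

    # Hypothetical example data: in practice you would gather these from the
    # systems themselves (product version APIs, package queries and so on).
    test_env = {"mft_server": "7.1.2", "openssl": "3.0.13", "os": "Windows Server 2022"}
    prod_env = {"mft_server": "7.1.0", "openssl": "3.0.13", "os": "Windows Server 2022"}

    for line in compare_environments(test_env, prod_env):
        print("MISMATCH:", line)  # e.g. mft_server: test='7.1.2' vs prod='7.1.0'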

All our vendors offer test licences at reduced rates. If it’s time to get this set up for your MFT solution, get in touch now. You can contact us via the website or by emailing your account manager.

Interested in a file transfer solution?

Managed File Transfer in Action

A well-known utilities company in Yorkshire were using multiple legacy systems and two disparate FTP solutions to move data into, out of and around their organisation. These systems had grown organically over time to tackle isolated file sharing issues as they arose. As it transpired, this approach left the company with an ungovernable mix of system-to-system and FTP solutions that required manual intervention and the ongoing revision of batch scripts.

The mounting costs generated by work duplication and management overheads, together with the risk associated with the absence of failover, were becoming a genuine concern. Bearing in mind that these systems were executing business-critical processes such as billing, debt management, banking and delivering mission-dependent data to employees in the field, recreating these undocumented workflows in the event of a disaster would be costly. Given the sensitive nature of certain pieces of data moving through these workflows, securing data was also a priority.

Pro2col worked alongside the customer to develop an understanding of their processes and document their key requirements. Armed with this information, we were able to identify the technologies that would meet these requirements, and help them through the selection and evaluation process. Specifically, the company were looking to:

  1. Secure the sending and receipt of confidential business and customer data
  2. Further automate the retrieval of time-sensitive data from remote systems, providing real-time updates of vital information to their workforce at regular intervals throughout the day

In terms of features, the company were looking for:

  • A solution that would support FTP, SFTP/FTPS and HTTP/HTTPS.
  • A user-friendly GUI for administration and configuration, as opposed to CLI and scripts.
  • The ability to schedule time- or event-driven actions.
  • Pre- and post-processing ability, e.g. archiving, moving or deleting files that have been processed (see the sketch after this list).
  • The capability to report failed transfers and system problems.
  • Potential to integrate with HP OpenView for system reporting.
  • Ability to perform ad hoc file transfers manually and simply via web browser or email plugin.
  • Ability to run concurrent processes.
  • Automatic failover to a backup system.
  • Compatibility with Windows 2008 R2.
  • Integration with Microsoft Active Directory.
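
To give a feel for what time- or event-driven actions with pre- and post-processing actually mean, here is a minimal sketch in Python using the paramiko SFTP library. An MFT product exposes this through its GUI rather than code – this is just the shape of the logic – and every hostname, credential and path below is a hypothetical placeholder.

    import time
    from pathlib import Path

    import paramiko  # third-party SSH/SFTP library (pip install paramiko)

    # Hypothetical placeholders throughout.
    HOST, PORT = "sftp.example.com", 22
    USERNAME, PASSWORD = "transfer_user", "change-me"
    REMOTE_INBOX, REMOTE_ARCHIVE = "/outbound", "/archive"
    LOCAL_DIR = Path("incoming")
    POLL_SECONDS = 600  # a simple time-driven schedule: poll every ten minutes

    def pull_and_archive() -> None:
        """Download every file waiting in the remote inbox, then move each one
        to a remote archive folder so it is never processed twice."""
        transport = paramiko.Transport((HOST, PORT))
        transport.connect(username=USERNAME, password=PASSWORD)
        sftp = paramiko.SFTPClient.from_transport(transport)
        try:
            LOCAL_DIR.mkdir(exist_ok=True)
            for name in sftp.listdir(REMOTE_INBOX):
                remote_path = f"{REMOTE_INBOX}/{name}"
                sftp.get(remote_path, str(LOCAL_DIR / name))          # pull
                sftp.rename(remote_path, f"{REMOTE_ARCHIVE}/{name}")  # post-process
        finally:
            sftp.close()
            transport.close()

    if __name__ == "__main__":
        while True:
            pull_and_archive()
            time.sleep(POLL_SECONDS)

Moving each file to an archive folder immediately after download is the post-processing step that stops the same file being picked up twice – exactly the kind of logic that was buried in the customer’s hand-maintained batch scripts.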

Based upon the information we gathered through the consultancy process, we were able to recommend the most suitable solution to meet their objectives – in this case, a combination of Ipswitch MOVEit Central and MOVEit DMZ with the Ad Hoc module. MOVEit Central was specifically designed to automate a wide range of mission-critical file transfers, enabling the company to automatically “pull, process, and push” files to any platform, across network architectures, operating systems and protocols. It would integrate directly into their existing data workflows, consolidating their automated file transfer tasks and allowing IT staff to create and administer them via a user-friendly GUI. For the ad hoc aspect of their file transfer requirements, MOVEit DMZ with the Ad Hoc module provided a secure, end-to-end solution for employees to send and receive mission-critical files.

This just gives you an idea of the potential of these solutions and the levels of automation that can be achieved. Within an enterprise environment such as a large utility company, a managed file transfer solution can save hours of manual processing and ensure that all the information is where it’s needed, when it’s needed. As with all of our customers, we’ll be working with this organisation in the months and years to come, and look forward to helping them achieve their maximum ROI.

Click here for more information on the Ipswitch file transfer products.

Click here if you are interested in the consultancy services, which helped this organisation identify the right solution for them.

Alternatively, don’t hesitate to contact a Pro2col team member on 0333 123 1240 if you wish to discuss your particular file transfer requirements.