Where is your data going and why? Wizuda GDPR features

Are your impact assessments and reporting procedures in place for GDPR? Danielle Cussen from Wizuda examines these important requirements in her guest blog post, ‘Where is your data going and why?’ Danielle explains what you need to do to comply and how Wizuda GDPR features will simplify compliance. Danielle is Managing Director at Wizuda.

It’s rare that a day goes by without a mention of the GDPR. Businesses across the globe are striving to achieve compliance by 25 May 2018. That’s when it comes into full force, with no grace period. The GDPR applies to any business collecting or processing personal data belonging to EU citizens.

The UK’s ICO issued a 12-step guide to preparing for the GDPR. The first step is about being aware of the GDPR and its impact. The second is about finding out what personal data you hold, where it’s collected from and where it’s being sent.

As you can imagine, a lot of these questions are going to land on IT’s desk. IT will need to identify transfers between systems and between internal departments, plus transfers to external parties. This might include third party data processors, within the EU and across the globe. IT will need to work with other stakeholders across the business who understand the data and the reasons for the transfer.

Wizuda GDPR impact assessments

Under the GDPR it is now mandatory to conduct a Data Protection Impact Assessment (DPIA) wherever processing is likely to result in a high risk (Article 35). If the risk level is unknown, doing an impact assessment is probably a good way to find out. Impact assessments will vary across organisations and departments, but you’d expect to see certain questions where data transfer is concerned. These would relate to the sensitivity of the data, whether it’s being sent within or outside the European Economic Area (EEA), who will have access to it, and the risk category, among other factors.

Wizuda allows users to build their impact assessments within the software. Users complete a question set, which forms the impact assessment. The system then guides users through the transfer process based on the requirements they have set out. For example, if the user has specified that the data needs to be encrypted in transit, it will guide them towards using SFTP or HTTPS. The system also guides users through any approval process.

This feature helps users to check their transfers are aligned to the requirements specified in the impact assessment. The impact assessments themselves are readily available for reporting and auditing purposes.

Wizuda GDPR reporting

For the GDPR, reporting visibility is key to compliance. Article 5 (and many others) stresses the need for “accountability” and “transparency” over all processing activities, not just cross-border transfers. IT needs to be able to provide accurate details of the transfers in place at any given time, not just on 25 May 2018, but on an ongoing basis. They may need to show all of the cross-border transfers outside of the EEA, with impact assessments showing the business reason and sign-off process. An automated process reduces this workload and provides process assurances.

A number of Wizuda features assist the user in accurate reporting of data transfers:

Wizuda’s Geographic Visual Maps show the live transfers in place across your organisation from one central hub. This view can be filtered by region or transfer mechanism, such as EEA, non-EEA, BCRs, Model Contracts and so forth.

Alternatively, Network Diagrams can be used to visualise the data flows across your network.

Both the Geographic Maps and Network Diagrams have full drilldown capability to view details of the files transferred, the full audit trail, authorisation workflows, and the corresponding impact assessments where applicable. This simplifies the path to demonstrating compliance.

There’s more information available on the Wizuda vendor page.

This is the first in a series of guest blog posts from the leading vendors, highlighting how a file transfer solution can add value to your organisation.

Related links

Are you reviewing your data transfer and file sharing processes and systems for GDPR compliance? Pro2col’s GDPR White Paper is an essential read for you.

Pro2col’s GDPR Advisory Service offers pre- and post-implementation planning options, depending on which stage your organisation is at.

Managed File Transfer Comparison Guide

[Updated – September 2017]

Our comparison guide is aimed at businesses of all shapes and sizes wanting to compare Managed File Transfer software. It answers the main questions we’re regularly asked about data transfer solutions and speeds up your selection process by allowing you to compare products side-by-side.

Unlike software vendors, who will obviously want to sell you their product, this guide gives an impartial comparison. That’s because Pro2col are independent experts in data transfer, able to help you select and implement the best technology for your business requirements and budget.

Before downloading the guide though, you need to make sure you’re ready to compare solutions. To do this you must understand exactly what you and all your stakeholders within the business need, both now and in the future. Without this information you could source the wrong solution and that will cost you in the long run. If you need to do some more preparation, visit pro2col.com to access other free planning resources and find out about our needs analysis service.

If you’re confident that you’ve got all this information, then this comparison guide is for you. It’s an updated fourth edition and reviews ten of the most cost-effective, popular and feature-rich products on the market, up from the original eight. We’ve included a new section on compliance too, which is important to consider with the General Data Protection Regulation (GDPR) around the corner.

It’s split into seven sections:

  • Solution Basics answers the key questions that more or less everyone asks us when looking for a data transfer solution.
  • Business Strategy prompts you to consider how your solution will be impacted by other policies within the business.
  • Technical Details looks at some of the key features of solutions at a more granular level.
  • Automation Options lists the most commonly required automation features; a key component of any Managed File Transfer solution.
  • Transfer Protocols reviews eleven of the most widely used file transfer delivery protocols.
  • Cloud Connectors lists eight of the most common cloud services that you’re likely to need to connect to.
  • Compliance lists common compliance standards, including GDPR.

Naturally this guide only includes so much detail, but it should give you a clear view of which features you need and which vendors are a good fit. If you have other questions then contact our team of pre-sales and technical consultants, who can provide product information, demonstrations and software evaluations.

If you find this comparison guide useful, then you’ll benefit from reading Pro2col’s other free resources, including hints on building a business case. You can access these from our website pro2col.com.

Resources Available For You

Find out your File Transfer requirements!

“Needs Analysis Service for File Transfer”

Includes:
  • Questionnaire to identify requirements
  • Analysis of results with a Pro2col expert
  • Recommended solution report
  • Up to one hour of consultation for Q&A

Are you sharing data securely?

Your employees need to share data between themselves to perform their roles effectively, but how do you ensure that this adheres to your organisation’s security policies? What can you do to control this and help them share data securely?

With multiple employees now working from different sites or hot desking, it’s an area that can easily spiral out of control, so we have a possible solution for you to consider.

Let’s take a common example for many organisations. Employees often need to share data with external third parties on an ad-hoc basis. For most of my time in IT, this has been done by sending an attachment in an email. Policies and procedures that users agree to on employment, along with mail filter tools such as Mimecast, are options that can be put in place to prevent data that should be secured from leaking via e-mail.

However, this doesn’t really address the issue. Sending files by e-mail invariably causes issues at the mail server stage, where space is generally at a premium. Mail sent to multiple recipients in the same organisation will result in numerous copies of the same file being stored, which is especially problematic when you consider that the majority of users don’t delete e-mails until their mailbox is full. Additionally, resources on mail servers are often challenged just by handling e-mails with large attachments. As a consequence, if a user runs into a block or needs to send or receive a file which is going to be stopped by the mail server, they may look for an alternative route such as a cloud-based file sharing solution.

Several years ago, I was told by an IT manager of a large media company, that their organisation moved nearly a terabyte of data through file sharing services every month. They felt the cost of sharing the data by other means or the delays involved would actually harm the business. The problem was they had no control and didn’t know if the data was authorised to be shared or where the data was going.

Sharing data securely with the Ad-Hoc messaging module of an MFT solution

Using the Ad-Hoc messaging module of a Managed File Transfer (MFT) solution would have allowed them to block sharing sites for all users, yet still allow users to share data in a controlled manner. Ad-Hoc messaging (sometimes referred to as EFSS or Enterprise File Sync & Share) allows clients to exchange e-mails containing hyperlinks to files rather than the files themselves; the files are stored on a web-enabled file transfer server to which both the sender and recipient are granted access. Although it is clearly desirable to remove the attachments passing through the mail server, this approach does highlight potential gaps in the governance of data entering or leaving the organisation, such as Data Loss Prevention (DLP) and virus-checking.
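
To make the link mechanism concrete, here is a minimal, generic sketch of how an expiring, signed download link might be generated and checked. It is purely illustrative and not how any particular MFT or EFSS product implements it; the secret, host name and file ID are made-up values.

```python
# Minimal sketch of an expiring, signed download link of the kind an
# ad-hoc/EFSS module sends instead of attaching the file itself.
# SECRET, the host name and the file ID are illustrative assumptions.
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"replace-with-server-side-secret"

def make_link(file_id: str, ttl_seconds: int = 86400) -> str:
    expires = int(time.time()) + ttl_seconds
    payload = f"{file_id}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    query = urlencode({"file": file_id, "expires": expires, "sig": sig})
    return f"https://mft.example.com/download?{query}"

def verify_link(file_id: str, expires: int, sig: str) -> bool:
    if time.time() > expires:
        return False                       # link has expired
    payload = f"{file_id}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

link = make_link("invoice-2018-041.pdf")
print(link)  # this URL goes into the notification e-mail, not the file
```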

MFT solutions now integrate with Anti-Virus (AV) and DLP solutions using an ICAP (Internet Content Adaptation Protocol) connector. When a file is shared, the MFT solution passes the file and other metadata to the DLP solution over ICAP. Based on its content, the DLP server then decides whether the file should be sent. If the file is allowed, an “OK” message is returned to the MFT server and the ad-hoc notification message is sent. If the file is blocked by the DLP server, the MFT server gets a “not OK” message and does not send the notification mail; the file is then deleted so it is not cached. Incoming and outgoing files can also be passed through an AV scanner using a similar method to ensure that malicious code is not being shared.
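
As a rough illustration of that allow/block decision flow (not any vendor’s actual connector), the sketch below uses a hypothetical dlp_scan_via_icap() helper to stand in for the ICAP call; everything else simply mirrors the logic described above.

```python
# Simplified sketch of the MFT-side decision flow. dlp_scan_via_icap() is
# a hypothetical stand-in for the product's ICAP connector, which would
# speak RFC 3507 to the DLP/AV server; here it is just a stub.
import os

def dlp_scan_via_icap(path: str, metadata: dict) -> bool:
    # Stub: a real connector sends the file plus metadata to the ICAP
    # server and returns True only if the server answers "OK".
    return True

def send_notification(recipient: str, path: str) -> None:
    # Stub for the ad-hoc notification e-mail containing the download link.
    print(f"Link to {os.path.basename(path)} sent to {recipient}")

def share_file(path: str, sender: str, recipient: str) -> None:
    metadata = {"sender": sender, "recipient": recipient,
                "size": os.path.getsize(path)}
    if dlp_scan_via_icap(path, metadata):
        send_notification(recipient, path)   # DLP said OK: send the link
    else:
        os.remove(path)                        # blocked: delete so nothing is cached
        print(f"Transfer of {path} blocked by DLP policy")
```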

When you first installed your MFT solution, ICAP support may not have been included in your AV or DLP product, but most now offer it, so it’s definitely worth reviewing your integration options.

ICAP is not a perfect solution and has some technical drawbacks. For example, most ICAP based solutions (and there are only a few) require that you provide the ICAP interface by way of a proxy, which will not necessarily interact well with every MFT solution – be sure to check which specific products are supported by your MFT system.  Another potential issue can be the length of time required to transfer large files to the ICAP server for inspection – in some cases this may result in a perceived lag during the sending of the Ad-Hoc message.  However, combined with the Ad-Hoc module of an MFT solution, it allows the control of data in and out of an organisation to meet IT security policies without restricting the end users from performing their duties.

If you would like to investigate whether an MFT solution would be right for your organisation, you can check out our Expert Guide to MFT, which includes some questionnaires to help you. Alternatively, if you’d like to discuss your options, feel free to give our team a call on 0333 123 1240.

Password Security in Managed File Transfer

Last week was “World Password Day”, a day designed to get people thinking about password security and, hopefully, to change their passwords. I was surprised to see an article from Sophos reporting that the average person has 19 passwords to remember and that almost a third of people struggle with strong passwords. With the raft of work systems, private email, social media, online shopping and banking passwords, I thought it would be many more. A quick tally of my own online passwords came to well over 30, although most of the private account passwords are variations on 4 main passwords. I worked for one very large organisation that insisted passwords were changed every month but suggested that you simply add the month digit to the end of your password, negating the password security almost entirely.

The full article from Sophos can be found here.

Having strong passwords and authentication methods for file transfer accounts is very important. There are several approaches for user authentication that are supported by most Managed File Transfer (MFT) solutions.

These are:

  • Application Controlled
  • External source (AD / LDAP / Other source)
  • Advanced Authentication using RADIUS or a One Time Password system
  • Private key authentication

With application-controlled authentication, the MFT solution controls password length, complexity, history and expiry using its own internal systems. Usually users will be prompted to change their passwords either by email or when they log in.
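
Purely as an illustration, the sort of rules such a policy enforces might look like the sketch below; the length, character-class and expiry thresholds are example values, not any product’s defaults.

```python
# Illustrative sketch of application-controlled password rules: length,
# character classes and expiry. The thresholds are example values only;
# a real MFT product enforces these through its own admin settings.
import re
from datetime import date, timedelta

MIN_LENGTH = 12
MAX_AGE = timedelta(days=90)

def password_ok(password: str) -> bool:
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return (len(password) >= MIN_LENGTH
            and all(re.search(c, password) for c in classes))

def password_expired(last_changed: date) -> bool:
    return date.today() - last_changed > MAX_AGE

print(password_ok("Tr4nsfer!2018"))        # True: long enough, mixed classes
print(password_expired(date(2018, 1, 1)))  # depends on today's date
```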

This works well but, for users inside the organisation, passwords can drift out of sync, and this leads to increased issues as users are asked to remember more and more passwords to access different systems. In this case, we usually recommend that the MFT solution uses the internal Active Directory or LDAP source. This allows users to use the same credentials that they log in to their computers with. Responsibility for changing the password then resides with the AD/LDAP system and the MFT solution will not normally track passwords. When a user presents their credentials to log in to the MFT solution, the system passes the username and password to the AD/LDAP source for verification. If the AD/LDAP system confirms the credentials are correct, the MFT solution lets the user in. As there is usually no caching of credentials, if a user changes their password on the AD/LDAP system then that change is reflected instantly in the MFT system.
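
For illustration, a pass-through check of this kind could be sketched with the ldap3 Python library as below; the directory host and UPN format are assumptions, and in practice the MFT product performs this step internally.

```python
# Sketch of pass-through authentication against AD/LDAP using the ldap3
# library: a successful bind with the supplied credentials means the
# directory accepts them, so the MFT solution lets the user in.
# The server address and UPN suffix are illustrative assumptions.
from ldap3 import ALL, Connection, Server
from ldap3.core.exceptions import LDAPBindError

def verify_credentials(username: str, password: str) -> bool:
    server = Server("ldaps://dc01.example.com", use_ssl=True, get_info=ALL)
    try:
        # Bind as the user; no password is stored or cached by the MFT layer.
        conn = Connection(server, user=f"{username}@example.com",
                          password=password, auto_bind=True)
        conn.unbind()
        return True
    except LDAPBindError:
        return False

# verify_credentials("alice", "her-current-AD-password")  # True if the bind succeeds
```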

To increase the security of AD/LDAP credential authentication, RADIUS solutions using time-limited one-time password tokens, or even SMS messages, can be integrated to provide an extra level of security.

In RADIUS authentication, the user or device sends a request to the MFT system to gain access to a particular network resource, then the system passes a RADIUS Access Request message to the RADIUS server, requesting authorization to grant access via the RADIUS protocol. RADIUS servers vary, but most can look up client information in text files, LDAP servers, or databases. The RADIUS server can respond with an Access Reject, Access Challenge, or Access Accept. If the RADIUS server responds with an Access Challenge, additional information is requested from the user or device, such as a secondary password. Access Accept and Access Reject allow or reject the user access respectively.
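
A rough sketch of that Access-Request/Accept/Challenge exchange, using the pyrad Python library purely as an example, might look like this; the server address, shared secret and dictionary file path are placeholder assumptions.

```python
# Sketch of the RADIUS flow described above, using pyrad. Server address,
# shared secret and dictionary path are assumptions; your RADIUS server
# and MFT product supply the real values.
import pyrad.packet as packet
from pyrad.client import Client
from pyrad.dictionary import Dictionary

def radius_authenticate(username: str, password: str) -> str:
    client = Client(server="10.0.0.5", secret=b"shared-secret",
                    dict=Dictionary("dictionary"))  # path to a standard RADIUS dictionary file
    req = client.CreateAuthPacket(code=packet.AccessRequest,
                                  User_Name=username)
    req["User-Password"] = req.PwCrypt(password)
    reply = client.SendPacket(req)
    if reply.code == packet.AccessAccept:
        return "accept"       # grant access
    if reply.code == packet.AccessChallenge:
        return "challenge"    # ask the user for a second factor / OTP
    return "reject"

# radius_authenticate("alice", "password-plus-token")
```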

Using AD/LDAP or RADIUS authentication works well for users who log into the system interactively using either a web interface or a file transfer client such as FileZilla, but it does not work well for accounts that are used as part of file transfer scripts.

The most popular method of securing these is to use “private key” or “key pair” authentication. With this, the account does not use a defined password; instead, the MFT solution sends a challenge to the client, which the client signs using the private half of the key. The MFT solution then verifies that signature against the public key registered for the account and, if it checks out, accepts the user as verified and allows the account access. In this way any scripts which need to access the MFT solution do not need to have passwords encoded into them in plain text. Key pair authentication works with SSH keys for SFTP and SSL certificates for FTPS and HTTPS connections.
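
For example, a transfer script might authenticate with a key pair rather than an embedded password; a minimal sketch with the paramiko library is shown below, where the host, account and paths are illustrative assumptions.

```python
# Sketch of a scripted SFTP upload authenticating with an SSH key pair
# instead of a hard-coded password (paramiko). Host name, user and paths
# are illustrative assumptions.
import paramiko

key = paramiko.Ed25519Key.from_private_key_file("/home/batch/.ssh/id_ed25519")

transport = paramiko.Transport(("mft.example.com", 22))
transport.connect(username="batchuser", pkey=key)   # no password in the script
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    sftp.put("/data/outbound/orders.csv", "/inbound/orders.csv")
finally:
    sftp.close()
    transport.close()
```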

With many more password breaches now coming not from brute force attacks but from compromised authentication databases, experts are advocating not longer or more complex passwords but Two-Factor Authentication (2FA). This can be achieved in your MFT solution using a combination of password and private key authentication, or RADIUS, and it works well for both users and scripts.

Now may be a good time to review your MFT password policies, and maybe time I changed some of my passwords too!

File Transfer Enables Formula 1 Teams To Go Even Faster

The world of Formula 1 (F1) racing is one of high speeds, high stakes and leading-edge technology. Drivers like Lewis Hamilton or Sebastian Vettel are just the public face of a much larger team working together to build a car that goes faster than their competitors’ cars. With stakes so high, racing teams are looking for ways in which they can gain a competitive edge.

And so data plays a key part in F1. Each car has hundreds of sensors monitoring every aspect of the car, generating data on fuel load, tyre temperature, brake performance and much, much more. During a race this data is typically streamed back to race HQ enabling in-race, real-time decisions to be made about race strategy.

Managed File Transfer in Formula 1

However, the sensors also create a large amount of data during pre-season testing, in simulation environments, during free practice before each race and during qualification, all of which needs analysing to gain the split second advantages that make the difference between first and second place.

The data produced is valuable, very valuable. The performance of the team has a clear, measurable impact on the prize money received, but what’s not quite so easy to quantify is the effect on the brand. The prize money available to the racing teams is, on its own, astronomical. Last season nearly $965m of prize money was split between the top 10 teams. Ferrari won the prize money race with $192m, even though they finished second in the constructors’ championship, thanks to a special agreement with F1.

F1 2016 Prize Money

Typically, racing teams spend much of the year in countries around the world where Internet connectivity isn’t always brilliant. Coupled with the distance back to their HQ and factories, any data transfers suffer from long round-trip times (RTT), high packet loss and latency on the connection. This combination of factors results in poor data throughput, which is where Managed File Transfer comes in.
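
To see why, a rough back-of-the-envelope calculation (with assumed figures) shows how the throughput of a single TCP connection collapses as RTT and packet loss rise.

```python
# Rough illustration, using assumed figures, of why high RTT and packet
# loss crush the throughput of a single TCP connection.

def window_limited_mbps(window_bytes: int, rtt_s: float) -> float:
    """Ceiling imposed by the TCP window: window / RTT."""
    return window_bytes * 8 / rtt_s / 1_000_000

def mathis_limited_mbps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """Mathis et al. approximation: throughput <= (MSS/RTT) * 1.22/sqrt(p)."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / loss ** 0.5) / 1_000_000

# A 64 KB window over a 300 ms trackside-to-UK link:
print(f"{window_limited_mbps(65_535, 0.300):.1f} Mbit/s")      # ~1.7 Mbit/s
# 1460-byte segments, 300 ms RTT, 1% packet loss:
print(f"{mathis_limited_mbps(1460, 0.300, 0.01):.1f} Mbit/s")   # ~0.5 Mbit/s
```

On numbers like these, even a fast line delivers only a fraction of its capacity to a single transfer, which is why the transfer technology chosen for long, lossy links matters so much.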

Pro2col was approached by a Formula 1 team to help them solve the problems they were having with file transfers. They needed a system that made it easy for teams travelling the globe to send large data sets from their laptops back to their HQ in the UK. It needed to cope with the challenging network environments they would encounter before and during the race season, and of course, security was of paramount importance.

Conducting A Thorough Needs Analysis

Pro2col conducted a thorough needs analysis with the Formula 1 team to understand their exact requirements, which allowed us to then review the available technologies in the marketplace, scoring them against the defined requirements.

Whilst a variety of products would have ‘done the job’, the devil was in the detail: not whether the software would do it, but how. Eventually, two technologies were recommended for evaluation. Pro2col’s technical team provided assistance in setting up the software and establishing the success criteria for the proof of concept, which resulted in both technologies performing as expected.

Working with the Formula 1 team, we established that one of the technologies had a much more competitive per-user pricing structure, with a clear growth path as they looked to expand it into other areas of the business, for example when sharing data with customers and suppliers.

Now seven years since the original implementation of the software, the F1 team continues to perform well on the track and grow the footprint of its Managed File Transfer solution, adding further licences when the need arises.

Disaster Recovery in Managed File Transfer

There is an increasing reliance on high availability to protect Managed File Transfer (MFT) systems, and indeed most MFT vendors provide a robust solution, often offering both Active-Active and Active-Passive configurations. There are, however, many circumstances where using high availability is simply not an option, whether due to cost, infrastructure or some other reason. In this case it is necessary to revert to the ‘old school’ way of doing things, using some form of backup-restore mechanism to offer disaster recovery.

To be clear, for those who have a different understanding of the difference between high availability and disaster recovery, this article is based upon the following definitions:

In high availability there is little or no disruption of service; after part of the infrastructure fails or is removed, the rest of the infrastructure continues as before.

In disaster recovery, the service is recovered to a new, cold or standby instance, either by automatic or manual actions.  This includes VM snapshots and restoring to a non-production environment.

Planning Ahead

It goes without saying that disaster recovery isn’t something that you can easily achieve on the fly; it’s important to have detailed plans and to rehearse them regularly until recovery always completes flawlessly. When you start planning for disaster recovery, the very first question should be: “What should my recovery environment look like?”

This might sound like a strange way to start, but just take a moment to consider why you have a Managed File Transfer system in the first place.  Do you need the data stored in it to be available following the recovery or just the folder structure?  It’s a best practice policy not to leave data in the MFT system for long periods of time – it should contain transient data, with an authoritative source secured elsewhere.  If you continue with this train of thought, think about how valid the content of any backup would be if (for example) it is only taken once per day.  Potentially that could mean 23 hours and 59 minutes since the previous backup; a lot can change in that time.

Similarly, consider that you may have another system sending data into MFT on a frequent basis; if that system needs to be recovered (due perhaps to a site outage), then you will need to find a common point in time that you are able to recover to, or risk duplicate files being sent following recovery activities (see RPO below).

Should your recovery environment be similarly sized to the production environment? Ideally, the answer is always going to be yes, but what if your production system is under-used, or sized to take into account periodic activities? In that case, a smaller, less powerful environment may be used.

RTO and RPO

Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are the two most critical points of any disaster recovery plan.  RTO is the length of time that it will take to recover your MFT system; RPO is the point in time that you will set your MFT system back to – frequently this is the last successful backup.  As already mentioned, you may need to synchronise the RPO with other independent systems.  Once you have decided upon the RPO, you need to plan how you will handle transfers which may have occurred since that time; will they be resent, or do you need to determine which files must not be resent?  Will you need to request inbound files to be sent again?

You can decide on RTO time only by executing a recovery test.  This will enable you to accurately gauge the amount of time the restore process takes; remember that some activities may be executed in parallel, assuming available resources.

Hosting MFT systems on Virtual Machine (VM) farms has changed the way that we consider RPO and RTO somewhat. In general, virtualisation platforms such as VMware and Hyper-V allow several possibilities for recovery, including:

  • A system snapshot taken periodically and shipped to the recovery site
  • Replication of the volume containing the VM (and generally several other VMs)
  • Hyper-V shared cluster environment

Of these, the Hyper-V shared cluster approach probably comes closest to being a high availability alternative; however, it should be remembered that it uses asynchronous replication of the VM, which means there is some loss of data or transactions, albeit a small one.

The Recovery

Let’s assume that you’ve decided to avoid the RPO question by presenting an ‘empty’ system in your recovery site.  This means that you will need to periodically export your production configuration and ship it to the recovery site.  Ideally, you would want to do this at least daily, but possibly more frequently if you have a lot of changes.  Some MFT systems allow you to export and ship the configuration using the MFT product itself – this is a neat, self-contained method that should be used if it’s available.  In this way you are more likely to be sure to have the very latest copy of the configuration prior to the server becoming unavailable.
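
If your product cannot ship its own configuration, a simple external job can do it instead. The sketch below is one possible approach under assumed paths, host and account names: it copies the latest export to the recovery site over SFTP and records a checksum so the copy can be verified before it is ever needed.

```python
# Sketch of an external job that ships the latest configuration export to
# the recovery site over SFTP and records a SHA-256 checksum alongside it.
# Paths, host and account are assumptions; prefer the MFT product's own
# export/ship feature where one exists.
import hashlib
from datetime import date

import paramiko

EXPORT_FILE = "/opt/mft/exports/config-latest.xml"   # produced by the nightly export
DR_HOST, DR_USER = "dr-mft.example.com", "drsync"
DR_KEY = "/home/drsync/.ssh/id_ed25519"

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

checksum = sha256(EXPORT_FILE)
remote_name = f"/dr/config/config-{date.today().isoformat()}.xml"

key = paramiko.Ed25519Key.from_private_key_file(DR_KEY)
transport = paramiko.Transport((DR_HOST, 22))
transport.connect(username=DR_USER, pkey=key)
sftp = paramiko.SFTPClient.from_transport(transport)
try:
    sftp.put(EXPORT_FILE, remote_name)
    with sftp.open(remote_name + ".sha256", "w") as f:
        f.write(f"{checksum}  {remote_name}\n".encode())
finally:
    sftp.close()
    transport.close()
```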

The actual MFT software may or may not be installed in advance, depending upon your licence agreement (some vendors permit this, others not – be sure to check as part of your planning).  In any event, it is best to keep the product installation executable(s) on the server in case they are required.

So, next on the list of things to think about is: what else do you need to complete the recovery? Unfortunately, the answer can be quite long:

  • DNS
    Can the new server have the same IP address as the old?  Do you need to add it?  The new server may well be on a completely different subnet.
    If you are using DNS CNAME records to reach the server, where are they updated?
    Is there a load balancer to be updated?
    Does the recovery server have the same firewall rules as the production server?
    Are you using a forward proxy to send traffic out of the network, and if so will it present the same source IP address?
    If you have multiple sites defined in your MFT system, does each have a unique IP address?
  • Keys and Certificates
    Are these included as part of the system configuration or do they have to be handled separately?
    Are PGP key-rings held in the home directory of the account that the MFT system runs under?
  • User Accounts
    Does your configuration export include locally defined users?  Do you make use of local groups on the server which may not be present on the recovery server?
    Will LDAP/LDAPS queries work equally well from this machine?

Returning to Normal Operations

Sooner or later you will need to switch back operations to the normal production environment.  Unfortunately, this isn’t always as straightforward as you could wish for.

When disaster struck and you initiated your disaster recovery plan, you were forced into it by circumstances.  It was safe to assume that data was lost and the important thing was to get the system back again.  Now however your recovery environment may have been running for days and it will probably have seen a number of transfers.  At this point you need to ‘drain’ your system of active transfers and potentially identify files which have been uploaded into your MFT but have not yet been downloaded.

Some MFT systems keep track of which files have been transferred (to avoid double sending); if your MFT system is one of these, then you will need to ensure that the production system knows which files the recovery system has already handled.  Regardless of this, you will need to ship your configuration back to the production server in order to accommodate any changes that have occurred – for example, users being created or deleted, or even simply changing their password.

Synchronise the return with other applications that send data through the MFT system in order to avoid data bottlenecks forming during the move; remember that any DNS changes you need to make at this time may take some time to replicate through the network.

Keeping the Recovery System Up To Date

Of course, any time you make an update to your production environment, it could easily invalidate your recovery environment.  An example might be something as simple as resizing a disk, or adding a new IP address – both of these activities should hopefully be covered by change management practices, but of course we all know that replication of changes into the recovery environment doesn’t always happen.  This is why it’s so important to perform regular disaster recovery exercises every six months or so, so that you can identify and resolve these discrepancies before a disaster occurs.  When considering what changes need to be replicated, look again at the areas you need to consider when first setting up the recovery environment.

Ramifications of Not Being Disaster Ready

For many organisations, their MFT system is far more important than people realise. An unplanned outage will prevent goods from being shipped, orders from being placed and payments from being sent, which obviously has a negative impact not only at a financial level, but also at other, intangible levels that aren’t so easily quantified. How likely are customers to use your services in the future if they can’t rely on their availability now?

There’s also the certification aspect to consider. If you are pursuing ISO 27001 certification, you need to have a realistic plan in place, and to test and maintain it – neglecting this will result in an audit failure and the potential loss of certification if it has already been awarded.

Finally, the most important thing to do is document EVERYTHING.  Every step should be able to be followed by someone without specific knowledge of the system.  Every change should be recorded, every test detailed, regardless of success or failure.

Resources Available For You

The Expert Guide to Managed File Transfer

Includes definitions, requirements assessment, product comparison, building your business case and much more.

Managed File Transfer Needs Analysis

200 essential questions to consider before implementing your chosen Managed File Transfer solution.

Managed File Transfer Comparison Guide

A full feature comparison of Managed File Transfer solutions from the leading vendors in the industry.