Aspera Delivers Major Advances in its Core FASP High-Speed Transfer Platform
- ascp4 is the next-generation Aspera file transfer binary, introducing a new architecture for ultra-high-speed transfer of small files as well as large data sets. The architecture transfers up to one million files per minute even at the smallest file sizes (e.g., under 10 KB) and sustains transfer rates above 5 Gbps under typical global WAN conditions (200 ms round-trip time, 2% packet loss).
- FASPstream transport expands Aspera’s FASP transport technology to “live” and “near-live” data streaming. Aspera FASP is a patented transport protocol for highly efficient bulk data transfer over IP networks, independent of distance and link quality (round-trip latency and packet loss). FASPstream extends this capability into a fully reliable streaming protocol for in-order data delivery over Internet WANs with excellent quality and negligible start-up delay.
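The in-order delivery guarantee described above can be illustrated with a toy sketch. This is a generic illustration of sequenced reassembly, not Aspera’s implementation: the receiver buffers blocks that arrive out of order and releases only a contiguous prefix to the consumer.

```python
# Toy sketch of reliable in-order delivery (not Aspera's implementation):
# out-of-order blocks are held, keyed by sequence number, until every
# earlier block has arrived.

class InOrderReassembler:
    def __init__(self):
        self.next_seq = 0   # next sequence number the consumer expects
        self.pending = {}   # out-of-order blocks awaiting earlier data

    def receive(self, seq, data):
        """Accept a block; return the blocks now deliverable in order."""
        self.pending[seq] = data
        deliverable = []
        while self.next_seq in self.pending:
            deliverable.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return deliverable

# Blocks arriving out of order are held until the gap is filled.
r = InOrderReassembler()
print(r.receive(1, b"world"))   # [] -- block 0 has not arrived yet
print(r.receive(0, b"hello"))   # [b'hello', b'world']
```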
Advances in Aspera Direct-to-Cloud Transfer and Autoscaling
Aspera Direct-to-Cloud Storage transfer capability moves even the largest (4K) media formats from source directly to an object storage destination with native I/O for all major cloud storage providers: IBM SoftLayer Swift, AWS S3, Microsoft Azure Blob, Akamai NetStorage, Limelight Orchestrate Cloud Storage, Google Cloud Storage, and HDFS (beta). Version 3.6 brings many new capabilities to all Aspera On Demand products, including:
- Server-side encryption at rest, complementing the existing client-side at-rest and in-transit encryption.
- Automatic determination of the cloud storage part size, allowing the largest files to be sent without special configuration.
- New clustered transfers enable 10 Gbps+ transfers in/out/between clouds.
- A new Aspera Transfer Cluster Manager (ATCM) with Autoscale technology, providing elastic scaling, a multi-tenant access-key system for security and transfer reporting, and automatic high availability. ATCM is a cloud-infrastructure-independent service created by Aspera for dynamic, real-time scale-out of transfer capacity: it automatically starts and stops transfer server instances, balances client requests across available instances, and enforces configurable service levels that govern the maximum transfer load per instance, the idle instances held available for “burst” demand, and the automatic decommissioning of unused instances.
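The automatic part-size determination above has to respect provider-side multipart limits. A minimal sketch of that calculation follows; the 10,000-part ceiling and 5 MiB minimum are assumptions modeled on AWS S3’s multipart-upload limits, not Aspera’s actual values.

```python
import math

# Sketch of automatic part-size selection, assuming a provider ceiling of
# 10,000 parts per object and a 5 MiB minimum part size (as with AWS S3).
# These constants are illustrative, not Aspera's configuration.
MAX_PARTS = 10_000
MIN_PART = 5 * 1024 * 1024  # 5 MiB

def choose_part_size(file_size: int) -> int:
    """Smallest part size (rounded up to a whole MiB) that fits MAX_PARTS."""
    needed = math.ceil(file_size / MAX_PARTS)
    part = max(MIN_PART, needed)
    mib = 1024 * 1024
    return math.ceil(part / mib) * mib

# A 1 TiB file needs parts larger than the 5 MiB minimum:
print(choose_part_size(1 << 40) // (1024 * 1024))  # 105 (MiB)
```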
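The Autoscale behavior described above, launching instances to stay under a per-instance load ceiling while keeping idle headroom for bursts and retiring surplus nodes, can be sketched as a simple policy function. The thresholds here are hypothetical stand-ins for the configurable ATCM service levels.

```python
# Illustrative sketch (not ATCM's actual policy engine) of Autoscale rules:
# respect a per-instance load ceiling, hold idle instances for burst
# demand, and retire unused capacity. Thresholds are hypothetical.

def scale_decision(active_transfers: int, instances: int,
                   max_load_per_instance: int = 10,
                   min_idle_instances: int = 2) -> int:
    """Return the change in instance count (+n to launch, -n to retire)."""
    # Instances required to stay under the load ceiling (ceiling division).
    needed_for_load = -(-active_transfers // max_load_per_instance)
    # Keep idle headroom available for sudden bursts.
    target = needed_for_load + min_idle_instances
    return target - instances

print(scale_decision(active_transfers=45, instances=4))  # 3: launch 3 more
print(scale_decision(active_transfers=5, instances=6))   # -3: retire 3
```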
Advances in Transfer Automation, Synchronization and Management
- A new Aspera WatchFolder Service is specifically designed to power high-volume automated file and directory transfers with advanced features for media workflows and content distribution.
Built on asperawatchd, a new file system notification service designed for speed, scale, and distributed change watching, watch folders can now cover huge file systems in large numbers. Collections of files and folders can be grouped into a single “drop” and transferred to remote nodes as one logical unit, with control over which files arrive last. “Growing” (in-progress) files are fully supported, and a RESTful API enables programmatic control for customized and automated processing.
- Aspera Sync 3.6 also integrates asperawatchd technology to capture changes on any local or shared-storage client host (CIFS, NFS, etc.) and aggregate them in real time into a single snapshot, for speed on very large file systems. File attribute changes (Windows ACLs, Unix ownership) are synced even when content is not updated, and LZ compression delivers high performance on low-capacity networks. Bidirectional synchronization now works on cloud storage as well as block storage.
- Aspera Console 3.0 for Centralized Management is built on a new architecture that can precisely regulate reporting load as the number of transfer nodes grows, dramatically improving scalability, robustness, and the timeliness of transfer status for large deployments. Many new settings and options have been added: advanced email notification triggers on any transfer attribute, such as source and destination path, Aspera Shares and Faspex user attributes, and transfer directories; managed node clusters, which allow managed nodes to be assigned to cluster groups with shared storage for automatic load balancing and failover of transfers; and an advanced search of transfer history by transfer name, ID, contact, path, and status.
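Programmatic control of the WatchFolder service over its RESTful API, mentioned above, might look like the following sketch. The port, endpoint path, and JSON schema are illustrative assumptions, not the documented Aspera Node API; consult the product’s API reference for the real contract.

```python
# Hedged sketch of creating a watch folder over REST. Endpoint, port, and
# body schema are hypothetical, not the documented Aspera API.
import base64
import json
import urllib.request

def build_watchfolder_request(host, user, password,
                              source_dir, target_host, target_dir):
    """Build a POST request that would create a watch folder (hypothetical schema)."""
    body = json.dumps({
        "source": {"path": source_dir},
        "target": {"host": target_host, "path": target_dir},
    }).encode()
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"https://{host}:9092/v3/watchfolders",  # hypothetical endpoint
        data=body,
        method="POST",
        headers={"Content-Type": "application/json",
                 "Authorization": "Basic " + credentials},
    )

# Sending the request requires a reachable transfer node:
# with urllib.request.urlopen(build_watchfolder_request(...)) as resp:
#     print(json.load(resp))
```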