
Tips for faster uploads



LIBSAFE Advanced scales to handle very large volumes of file uploads and downloads. Downloads scale without limitation and need no special approach, but when you plan to upload more than 10 TB of content, a few techniques will make the transfer faster.

Three main elements contribute to faster uploads. Combine them all to obtain the best performance:

  • Upload tool in use

  • Parallelization of uploads

  • Upload prefixes and containers

You can use any S3-compatible tool to upload content to your Data Containers, but for the best results, we recommend using the most recent version of the AWS CLI. Unlike other tools, it has been optimized for parallelization.

Some Linux distributions install older versions of the client by default. Make sure you are always using the latest version.

You can use the AWS CLI with LIBSAFE Advanced guide for examples of how to use this tool.
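As a quick illustration, a single-file upload with the AWS CLI looks like the sketch below. The container name and endpoint URL are placeholders; take the real values from the "Getting your S3 bucket name" and "Getting your S3 storage credentials" recipes.

```shell
# Confirm you are on a recent AWS CLI release; some Linux distributions
# still package an older client by default.
aws --version

# Example upload. "my-container" and the endpoint URL are placeholders --
# use the bucket name and endpoint from your LIBSAFE Advanced configuration.
aws s3 cp ./report.pdf s3://my-container/report.pdf \
    --endpoint-url https://s3.example.org
```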

Parallelization of uploads

If you plan to transfer a large number of files, each relatively small, you can benefit significantly from parallelizing multiple upload processes.

The AWS CLI tool already has some built-in parallelization, but to achieve better results, you can launch multiple processes in parallel.
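A minimal sketch of this pattern using `xargs -P` is shown below. The container name `s3://my-container` is a placeholder, and `UPLOAD_CMD` defaults to `echo` so the sketch dry-runs safely; set `UPLOAD_CMD="aws s3 cp"` for a real transfer.

```shell
# Parallel-upload sketch (POSIX shell). UPLOAD_CMD defaults to "echo" for a
# safe dry run; set UPLOAD_CMD="aws s3 cp" to perform real transfers.
UPLOAD_CMD="${UPLOAD_CMD:-echo}"

# Stand-in for a local dataset: three small files.
mkdir -p dataset
printf 'x' > dataset/a.bin
printf 'y' > dataset/b.bin
printf 'z' > dataset/c.bin

# -P 8 keeps up to eight upload processes running at once;
# -print0 / -0 handle file names that contain spaces.
find dataset -type f -print0 \
  | xargs -0 -P 8 -I {} $UPLOAD_CMD {} s3://my-container/{}
```

With `echo`, each line printed shows the transfer that would run; with `aws s3 cp`, up to eight transfers proceed concurrently.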

Upload prefixes and containers

Amazon S3 supports a request rate of 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix in a bucket. The resources for this request rate aren't automatically assigned when a prefix is created. Instead, as the request rate for a prefix increases gradually, Amazon S3 automatically scales to handle the increased request rate.

If there is a fast spike in the request rate for objects in a prefix, Amazon S3 might return 503 Slow Down errors while it scales in the background to handle the increased request rate. To avoid these errors, you can:

  • Configure your application to gradually increase the request rate and to retry failed requests using an exponential backoff algorithm, and/or

  • Distribute objects and requests across multiple prefixes (folders) or containers, since the limit stated above applies per prefix.
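For the first option, recent AWS CLI versions expose documented retry settings: "adaptive" retry mode adds client-side rate limiting and exponential backoff when the service throttles requests, for example with 503 Slow Down responses. The values below are illustrative.

```shell
# AWS CLI retry configuration via its documented environment variables.
# "adaptive" retry mode rate-limits the client and applies exponential
# backoff when throttled (e.g. on 503 Slow Down responses).
export AWS_RETRY_MODE=adaptive
export AWS_MAX_ATTEMPTS=10
```

With these set, a subsequent `aws s3 cp --recursive` run backs off and retries automatically instead of failing on throttling errors. The same settings can also be placed in `~/.aws/config` as `retry_mode` and `max_attempts`.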
