Download content


LIBSAFE Go allows users to download content from the platform using several methods:

Using the Management Interface

  1. Locate the data container from which you want to download using the Containers menu section or by searching.

  2. Select Check-in if you are not already checked in to the data container and check-in/out is enabled for it.

  3. On the data container page, choose Explore content.

  4. Select the file you want to download, right-click it, and select Download. You can select multiple files or folders; the platform will create a ZIP file containing them (named after the first selected file) and start downloading it.

Note that certain limitations exist when downloading content using the browser:

  • Unless you use one of the methods described in the Data Integrity section, no strong integrity verification is provided.

  • For high-volume uploads or downloads (by file count, size, or both), the browser may be slow or unable to download your content. An S3 client is recommended.

Using the API

API examples here are just illustrative. Check the API documentation for additional information and all available methods.

The S3 protocol is the recommended way to upload or download content from the platform: it is the most performant, most parallelizable, and easiest option. Use the API only for small workloads and low concurrency.

  1. Sign in to the platform's Management Interface

  2. Obtain your API key by selecting your name and then Access Methods.

Then use the following method:

curl --request GET \
     --url "$your_platform_url/api/file/{your file ID}/download" \
     --header "Content-Type: application/json" \
     --header "authorization: Bearer $your_platform_api_key" \
     --data '{}' -L --output your_downloaded_file.txt

When you make this request, the platform will 1) verify that you have read permission for the file, 2) create a pre-signed download URL valid for 20 minutes, and 3) respond with a 301 redirect to that pre-signed URL. Make sure your script or tool follows redirects (-L in curl).
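
If you are scripting the download rather than using curl, the same flow can be expressed in Python. The sketch below is illustrative only and uses placeholder values for the platform URL, API key, and file ID; it assumes the requests library, which follows the redirect to the pre-signed URL automatically:

import requests

# Placeholders: replace with your platform URL, API key, and file ID
PLATFORM_URL = "https://your-platform-url"
API_KEY = "your_platform_api_key"
FILE_ID = 12345

# requests follows the 301 redirect to the pre-signed URL automatically
response = requests.get(
    f"{PLATFORM_URL}/api/file/{FILE_ID}/download",
    headers={"authorization": f"Bearer {API_KEY}"},
    stream=True,
)
response.raise_for_status()

# Stream the response body to disk in 1 MB chunks
with open("your_downloaded_file.txt", "wb") as f:
    for chunk in response.iter_content(chunk_size=1024 * 1024):
        f.write(chunk)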

Using the S3 protocol

  1. Sign in to the platform's Management Interface

  2. Click on your name and select Access Methods

  3. In the S3 compatible protocol section, click Regenerate

  4. Copy your Access Key and Secret Key and store them in a safe location.

Please note that the Secret Key will only be displayed once. It is possible to regenerate a key, but the old key will be invalidated and any process that uses it will receive an "access denied" error.

  5. Configure the AWS CLI (or another S3-compatible tool):

$ aws configure
AWS Access Key ID [None]: <your access key>
AWS Secret Access Key [None]: <your secret key>
Default region name [None]: (just press ENTER here for None)
Default output format [None]: (just press ENTER here for None)

Use:

  • Access Key: The one you obtained in the previous step.

  • Secret Key: The one you obtained in the previous step.

  • Region: Leave it blank for the default.

  • Output format: Leave it blank for the default.

Your S3 client may also ask for:

  • S3 endpoint and DNS-style bucket: Leave them blank for the defaults.

  • Chunk size: Set it to between 3 MB and a maximum of 3.9 GB; 50 MB is the recommended value (a programmatic example follows this list).
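
If you script transfers instead of using a CLI or GUI client, the chunk size can also be set programmatically. The following is a minimal sketch assuming boto3 (not required by the platform, just one common S3 SDK); the bucket name and object key are illustrative and follow the path convention described below:

import boto3
from boto3.s3.transfer import TransferConfig

# 50 MB multipart chunk size, within the 3 MB to 3.9 GB range noted above
transfer_config = TransferConfig(multipart_chunksize=50 * 1024 * 1024)

# Use the Access Key and Secret Key obtained in the previous step
s3 = boto3.client(
    "s3",
    aws_access_key_id="<your access key>",
    aws_secret_access_key="<your secret key>",
)

# Illustrative placeholders: {S3 bucket name} and {container id}/{path to your file}
s3.download_file(
    "libsafes3bucket",
    "5/myfile.jpg",
    "myfile.jpg",
    Config=transfer_config,
)

Passing Config=transfer_config to download_file or upload_file makes the SDK use the recommended 50 MB chunk size for multipart transfers.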

The path to your files follows this convention:

s3://{S3 bucket name}/{container id}/{path to your file}

So you can use:

$ aws s3 cp s3://libsafes3bucket/5/myfile.jpg myfile.jpg
   download: s3://libsafes3bucket/5/myfile.jpg to ./myfile.jpg
  • S3 bucket: The S3 bucket in which your data container is located (libsafes3bucket in the example).

  • Data container identifier: The data container to download from (5 in the example).

  • File path: The path and file name of the file to download (myfile.jpg in the example).
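
To script a bulk download, the hedged sketch below (boto3 is assumed as the S3 SDK, and the bucket name, container identifier, and local directory are illustrative placeholders) lists every object under a data container prefix and downloads it, following the same path convention:

import os
import boto3

# Illustrative placeholders: replace with your own values
BUCKET = "libsafes3bucket"
CONTAINER_ID = "5"
LOCAL_DIR = "downloaded"

s3 = boto3.client(
    "s3",
    aws_access_key_id="<your access key>",
    aws_secret_access_key="<your secret key>",
)

# Object keys follow the {container id}/{path to your file} convention
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=f"{CONTAINER_ID}/"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):
            continue  # skip folder placeholder objects
        # Strip the container ID prefix to rebuild the local path
        local_path = os.path.join(LOCAL_DIR, *key.split("/")[1:])
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        s3.download_file(BUCKET, key, local_path)
        print(f"download: s3://{BUCKET}/{key} to {local_path}")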

You can use many S3-compatible CLI or GUI tools, as available in your environment. Make sure you check the corresponding guide (for example, Using S3 Browser in the Cookbook), as each tool is configured in a similar way.

When using a CLI tool, we recommend the AWS CLI; see AWS CLI with LIBSAFE Go.

Depending on the region and other settings, the platform keeps your data container inside a particular S3 bucket. All data containers in your instance may or may not share the same S3 bucket. To obtain the bucket name associated with the data container you want to download from, see Getting your S3 bucket name.
