Download content

LABDRIVE allows users to download content from the platform using several methods:

Using the Management Interface

1. Locate the data container from which you want to download using the Containers menu section or by searching.

2. Select Check-in if you are not already checked in to the container and check-in/out is enabled for the data container.

3. In the data container page, choose Explore content:

4. Select the file you want to download, right-click it and select "Download". You can select multiple files or folders; LABDRIVE will create a ZIP file (named after the first selected file) containing them and start the download.

Note that certain limitations exist when downloading content using the browser:

  • Unless you use one of the methods described in the Data Integrity section, no strong integrity guarantee is provided.

  • For high-volume uploads/downloads (by file count, size or both), the browser may be slow or unable to download your content. An S3 client is recommended instead.

Using the API

The API examples here are illustrative only. Check the LABDRIVE API documentation for additional information and the full list of available methods.

The S3 protocol is the recommended way to upload or download content from the platform: it is the fastest, most parallelizable and easiest option. Use the API only for small workloads and low concurrency.

1. Sign in to the LABDRIVE Management Interface

2. Obtain your LABDRIVE API key by clicking your name and then selecting Access Methods.

3. Use the following request:

curl --request GET \
     --url "$your_platform_url/api/file/{your file ID}/download" \
     --header "Content-Type: application/json" \
     --header "Authorization: Bearer $your_platform_api_key" \
     -L --output your_downloaded_file.txt

When you make this request, LABDRIVE will 1) verify that you have read permissions for the file, 2) create a pre-signed download URL valid for 20 minutes, and 3) send back a 301 redirect to the pre-signed URL. Make sure your script or tool is allowed to follow redirects (-L in curl).
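As a sketch of the same flow with placeholder values (the platform URL, API key variable and file ID below are hypothetical):

```shell
# Hypothetical values -- substitute your own platform URL and file ID.
PLATFORM_URL="https://acme.labdrive.net"
FILE_ID=12345

# The endpoint that answers with a 301 redirect to a pre-signed URL.
DOWNLOAD_ENDPOINT="${PLATFORM_URL}/api/file/${FILE_ID}/download"

# -L is essential: without it, curl stops at the 301 redirect instead of
# following it to the pre-signed URL (which is only valid for 20 minutes).
echo "curl -L --header \"Authorization: Bearer \$API_KEY\" --output downloaded_file \"${DOWNLOAD_ENDPOINT}\""
```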

Using an S3-compatible tool

You can use many S3-compatible CLI or GUI tools, depending on what is available in your environment. Make sure you check the Using S3 Browser guide, as other tools are configured in a similar way. When using a CLI tool, we recommend the AWS CLI with LABDRIVE.

1. Sign in to the LABDRIVE Management Interface

2. Click on your name and select Access Methods

3. In the S3 compatible protocol section, click Regenerate

4. Copy your Access Keys and Secret Keys and store them in a safe location

Please note that the Secret Key will only be displayed once. It is possible to regenerate a key, but the old key will be invalidated and any process that uses it will receive an "access denied" error.

5. Configure your preferred S3 CLI tool. The following example uses s3cmd:

$ s3cmd --configure

Access Key: AKIAR*********IDUP
Secret Key: OGtPO**************UT09
Default Region [US]:
S3 Endpoint [s3.amazonaws.com]: 

DNS-style bucket+hostname:port template for
accessing a bucket [%(bucket)s.s3.amazonaws.com]: 

New settings:
  Access Key: AKIAR*********IDUP
  Secret Key: OGtPO**************UT09
  Default Region: US
  S3 Endpoint: s3.amazonaws.com
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n

Save settings? [y/N] y
Configuration saved to '/home/libnova/.s3cfg'

Use:

  • Access Key: The one you obtained in the previous step.

  • Secret Key: The one you obtained in the previous step.

  • Region: Leave it blank for the default.

  • S3 Endpoint and DNS-style bucket: Leave it blank for the default.

  • If the transfer tool asks for a chunk size, set it to a value between 3 MB and a maximum of 3.9 GB; 50 MB is the recommended chunk size.
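For s3cmd specifically, the chunk size can be passed on the command line with the --multipart-chunk-size-mb option (value in megabytes). A sketch with the recommended value, using a hypothetical bucket and container id:

```shell
# Recommended chunk size in MB (allowed range: 3 MB to 3.9 GB).
CHUNK_MB=50

# Hypothetical bucket and container id; see Getting your S3 bucket name.
echo "s3cmd put --multipart-chunk-size-mb=${CHUNK_MB} bigfile.dat s3://acme-labdrive/6352/bigfile.dat"
```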

6. Depending on the region and other settings, LABDRIVE keeps your data container inside a particular S3 bucket. All data containers in your instance may or may not share the same S3 bucket. To obtain the bucket name associated with the data container you want to work with, see Getting your S3 bucket name.

And then use:

$ s3cmd get s3://acme-labdrive/6352/somefile.xml some-file.xml

s3://acme-labdrive/6352/somefile.xml -> some-file.xml  [1 of 1]
 123456 of 123456   100% in    2s    51.75 kB/s  done

The URL is formed using:

  • Protocol prefix ("s3://") to indicate the S3 protocol

  • S3 bucket: the S3 bucket your data container is in. See Getting your S3 bucket name to get it.

  • Data container identifier ("6352/") to indicate the data container to download from

  • File path ("somefile.xml") to indicate the path and file name of the file to download
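Putting those components together in shell, using the hypothetical bucket, container id and file from the example above:

```shell
# Hypothetical values from the example above.
BUCKET="acme-labdrive"       # see Getting your S3 bucket name
CONTAINER_ID=6352            # data container identifier
FILE_PATH="somefile.xml"     # path and file name inside the container

S3_URL="s3://${BUCKET}/${CONTAINER_ID}/${FILE_PATH}"

# The same download with s3cmd and with the AWS CLI:
echo "s3cmd get ${S3_URL} some-file.xml"
echo "aws s3 cp ${S3_URL} some-file.xml"
```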

Using XrootD

The XROOTD project provides high-performance, scalable, fault-tolerant access to data repositories. You can access your LABDRIVE content using XROOTD, and you can make it available to unauthenticated/anonymous users.

Because of an XROOTD protocol limitation, white space (the blank space character) is not allowed in file names. Files with white space or other characters that XROOTD does not support are not available through this transfer method.

LABDRIVE supports read-only access using XROOTD for now.

1. Sign in to the LABDRIVE Management Interface

2. Click on your user name and select Access Methods

3. Locate the XROOTD section and get your XROOTD server name, username and password. If the Password is not shown, select Regenerate to create a new one.

4. Use the xrdfs client to perform the desired actions, such as ls or cat.

Your files in the XROOTD server are in folders prefixed with the container id/name. If your file is /Unstructured/datasets/myfile.dat in the container 142, the path to the file would be:

root://<your XROOTD host name>//142/Unstructured/datasets/myfile.dat

See xrdfs --help for the available commands. Not all of them are supported in LABDRIVE.

5. You can also download files using xrdcp.
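As a sketch, with a hypothetical XROOTD host name and the container 142 example from above:

```shell
# Hypothetical host; get the real one from Access Methods.
XROOTD_HOST="xrootd.example.org"
CONTAINER_ID=142
REMOTE_PATH="/Unstructured/datasets/myfile.dat"

# Note the double slash between the host name and the container id.
XROOTD_URL="root://${XROOTD_HOST}//${CONTAINER_ID}${REMOTE_PATH}"

echo "xrdcp ${XROOTD_URL} myfile.dat"
```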
