Upload content

The platform allows users to upload content using several methods.

Using the Management Interface

1. Locate the data container you want to upload to using the Containers menu section or by searching.

2. Select Check-in if you are not already checked into the container and check-in/out is enabled for the data container.

3. On the data container page, choose Explore content.

4. Drag and drop the files you want to upload into the files and folders area.

Note that certain limitations exist when uploading content using the browser:

  • Unless you use one of the methods described in the Data Integrity section, no strong integrity verification is provided.

  • For high-volume uploads and downloads (by file count, size, or both), the browser may be slow or unable to transfer your content.

Using an S3-compatible tool

You can use any of the many S3-compatible CLI or GUI tools available in your environment. Make sure you check the Using S3 Browser guide, as other tools are configured in a similar way. When using a CLI tool, we recommend the AWS CLI with LABDRIVE.

When uploading using this method, make sure the client performs integrity verification on upload, and/or use one of the other methods described in the Data Integrity section.
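For example, the AWS CLI can be pointed at the platform's S3 endpoint with the --endpoint-url option. A minimal sketch, assuming the endpoint URL shown on your Access Methods page (a placeholder below) and the example bucket and container identifiers used later in this guide:

$ aws configure        # enter the Access Key and Secret Key from Access Methods
$ aws s3 cp some-file.xml s3://libnova1234/6352/some-file.xml \
    --endpoint-url "https://your-platform-s3-endpoint"

Recent versions of the AWS CLI also typically apply checksum validation to uploads, which helps with the integrity recommendation above.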

1. Sign in to the Platform's Management Interface

2. Click on your name and select Access Methods

3. In the S3 compatible protocol section, click Regenerate

4. Copy your Access Keys and Secret Keys and store them in a safe location. Note that more than one set of credentials can exist.

Please note that the Secret Key will only be displayed once. It is possible to regenerate a key, but the old key will be invalidated and any process that uses it will receive an "access denied" error.

5. Configure your preferred S3 CLI tool. The following example uses s3cmd:

$ s3cmd --configure

Access Key: AKIAR*********IDUP
Secret Key: OGtPO**************UT09
Default Region [US]:
S3 Endpoint [s3.amazonaws.com]: 

DNS-style bucket+hostname:port template for
accessing a bucket [%(bucket)s.s3.amazonaws.com]: 

New settings:
  Access Key: AKIAR*********IDUP
  Secret Key: OGtPO**************UT09
  Default Region: US
  S3 Endpoint: s3.amazonaws.com
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] n

Save settings? [y/N] y
Configuration saved to '/home/libnova/.s3cfg'

Use:

  • Access Key: The one you obtained in the previous step.

  • Secret Key: The one you obtained in the previous step.

  • Region: Leave it blank for the default.

  • S3 Endpoint and DNS-style bucket: Leave it blank for the default.

  • If the transfer tool asks for a chunk size, set it between 3MB and a maximum of 3.9GB; 50MB is the recommended chunk size (see the sketch after this list).
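With s3cmd, for instance, the chunk size is controlled by the --multipart-chunk-size-mb option. A minimal sketch using the recommended 50MB (the bucket and container identifiers are the illustrative ones explained in the next step):

$ s3cmd put --multipart-chunk-size-mb=50 big-file.tar s3://libnova1234/6352/big-file.tar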

6. Depending on the region and other settings, the platform keeps your data container inside a particular S3 bucket, and all data containers in your instance may or may not share the same bucket. To obtain the bucket name associated with the data container you want to upload to, see Getting your S3 bucket name.

And then use:

$ s3cmd put some-file.xml s3://libnova1234/6352/somefile.xml

some-file.xml -> s3://libnova1234/6352/somefile.xml  [1 of 1]
 123456 of 123456   100% in    2s    51.75 kB/s  done

The URL is formed using:

  • Protocol prefix ("s3://") to indicate the S3 protocol

  • S3 bucket: The S3 bucket your data container is in. See Getting your S3 bucket name to get it.

  • Data container identifier ("6352/") to indicate the data container to upload to

  • File path ("somefile.xml") to indicate the path and file name to upload to.
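Following the same URL pattern, you can also upload an entire local folder with s3cmd's recursive mode. A sketch, where dataset/ is a hypothetical local folder and the bucket and container identifiers match the example above:

$ s3cmd put --recursive dataset/ s3://libnova1234/6352/dataset/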

Using the API

You can also make an API call to upload files.

Using S3 for uploads is always recommended over the API, as it is far more scalable, robust, and fast. Use the API for uploads only when S3 can't be used.

Single-part upload processes are limited to 100MB files. You can upload larger files using the multi-part upload process.
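If you script your uploads, you can branch on that limit before choosing a process. A minimal sketch, assuming GNU stat (on macOS, use stat -f %z instead):

size=$(stat -c %s LocalFileName.txt)   # file size in bytes
if [ "$size" -le 104857600 ]; then     # 100MB single-part limit
    echo "use the single-part upload process"
else
    echo "use the multi-part upload process"
fi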

1. Sign in to the Platform's Management Interface

2. Obtain your platform's API key by selecting your name and then Access Methods.

Then, depending on the file size, use one of the following upload processes.
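The curl examples below assume two shell variables holding your instance URL and API key; set them first, for example (both values are placeholders):

$ export your_platform_url="https://your-instance-url"
$ export your_platform_api_key="your-api-key"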

For single-part upload processes, use the following method:

 curl --request POST \
    --url "$your_platform_url/api/container/<target container id>/file/upload" \
    --header "authorization: Bearer $your_platform_api_key" \
    --header "content-type: multipart/form-data" \
    --form "fileName=DestinationFileName.txt" \
    --form "file=@LocalFileName.txt" \
    --form "path=/Desired_path/"
    

For multi-part upload processes:

Multipart chunks must be larger than 5MB (5242880 bytes). If not, you will get the following error during your upload process: EntityTooSmall (client): Your proposed upload is smaller than the minimum allowed size.

When using multipart uploads, you first upload the first part and retrieve the token (uploadId) used for the subsequent upload requests.

For the first chunk, use:

 curl --request POST \
    --url "$your_platform_url/api/container/3/file/upload" \
    --header "authorization: Bearer $your_platform_api_key" \
    --header "content-type: multipart/form-data" \
    --form "chunkIndex=1" \
    --form "chunkCount=2" \
    --form "fileName=mytest2.txt" \
    --form "file=@mytest2.txt" \
    --form "path=/simple_upload/"

That will deliver back the uploadId parameter, which you then use for the subsequent parts:

 curl --request POST \
    --url "$your_platform_url/api/container/3/file/upload" \
    --header "authorization: Bearer $your_platform_api_key" \
    --header "content-type: multipart/form-data" \
    --form "chunkIndex=2" \
    --form "chunkCount=2" \
    --form "fileName=mytest2.txt" \
    --form "file=@mytest2.txt.secondPart" \
    --form "uploadId=4j0BNWzY5jI4genBEgM1ReLKhQ3_PpBdVNcSsccQ5sEvSXPmNjL94wOZX2BBxs6Oq29bZ8YufE5JNkq_9fwL8yBL.dpBU3_M.rN8.JZU7aEHS6rSL9LCi_obwHwdmWMp" \
    --form "path=/simple_upload/"
