Upload content
The platform allows users to upload content using several methods.
1. Locate the data container you want to upload to using the Containers menu section or by searching.
2. Select Check-in if you are not checked into the container and check-in/out is enabled for the data container.
3. On the data container page, choose Explore content:
4. Drag and drop the files you want to upload to the files and folders area:
Note that certain limitations exist when uploading content using the browser:
Unless you use one of the methods described in the Data Integrity section, no strong integrity is provided.
For high-volume uploads/downloads (by file number, size, or both), the browser may be slow or unable to upload your content.
You can use any S3-compatible CLI or GUI tool available in your environment. Make sure you check the Using S3 Browser guide, as every other tool is configured in a similar way. When using a CLI tool, we recommend the AWS CLI with LABDRIVE.
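As a sketch, an AWS CLI upload to an S3-compatible endpoint could look like the following. The bucket name, endpoint URL, and file name are placeholders, not values from your instance; the built command is echoed rather than executed so you can review it first:

```shell
# Placeholders - substitute the values for your own instance.
ENDPOINT="https://s3.your-instance.example.com"   # your instance's S3 endpoint
DEST="s3://your-bucket/6352/dataset.tar.gz"       # bucket / container id / file path

# Build the AWS CLI upload command for the S3-compatible endpoint.
CMD="aws s3 cp ./dataset.tar.gz $DEST --endpoint-url $ENDPOINT"

# Review it before running; execute it yourself once the placeholders are filled in.
echo "$CMD"
```

Once the placeholders are replaced with your real bucket name and endpoint, running the built command uploads the file under the data container's prefix in the bucket.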
When uploading using this method, make sure the client performs integrity verification on upload, and/or use one of the other methods described in the Data Integrity section.
1. Sign in to the Platform's Management Interface
2. Click on your name and select Access Methods
3. In the S3 compatible protocol section, click Regenerate
4. Copy your Access Keys and Secret Keys and store them in a safe location. Note that more than one set of credentials can exist.
Please note that the Secret Key is only displayed once. You can regenerate a key, but the old key will be invalidated and any process that uses it will receive an "access denied" error.
5. Configure your preferred S3 CLI tool. The following example uses s3cmd:
Use:
Access Key: The one you obtained in the previous step.
Secret Key: The one you obtained in the previous step.
Region: Leave it blank for the default.
S3 Endpoint and DNS-style bucket: Leave it blank for the default.
If the transfer tool asks for a chunk size, set it to a value between 3MB and a maximum of 3.9GB; 50MB is the recommended chunk size.
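For reference, s3cmd stores these settings in `~/.s3cfg`. A minimal fragment might look like the following; all values are illustrative placeholders, and the host settings are omitted so the defaults apply, as described above:

```ini
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
multipart_chunk_size_mb = 50
```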
6. Depending on the region and other settings, the platform keeps your data container inside a particular S3 bucket; the data containers in your instance may or may not share the same bucket. To obtain the bucket name associated with the data container you want to upload to, see Getting your S3 bucket name.
And then use:
The URL is formed using:
Protocol prefix ("s3://") to indicate the S3 protocol
S3 bucket: The S3 bucket your data container is in. See Getting your S3 bucket name.
Data container identifier ("6352/") to indicate the data container to upload to
File path ("somefile.xml") to indicate the path and file name to upload to.
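The URL formation above can be sketched in shell. The bucket name is a placeholder (see Getting your S3 bucket name), and the s3cmd call is commented out so the sketch runs without credentials:

```shell
BUCKET="your-bucket"     # the S3 bucket your data container is in (placeholder)
CONTAINER="6352"         # data container identifier
FILE="somefile.xml"      # path and file name to upload to

# s3:// prefix + bucket + container identifier + file path
URL="s3://${BUCKET}/${CONTAINER}/${FILE}"
echo "$URL"

# s3cmd put "$FILE" "$URL"   # run this once BUCKET holds your real bucket name
```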
You can also make an API call to upload files.
Using S3 for uploads is always recommended over the API, as it is much more scalable, robust, and fast. Use the API for uploads only when S3 can't be used.
Single-part upload processes are limited to 100MB files. You can upload larger files using the multi-part upload process.
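As a quick sketch, the 100MB limit means a client has to pick the upload mode based on the file size. The file size below is an illustrative value:

```shell
LIMIT=$((100 * 1024 * 1024))       # 100 MB single-part upload limit
FILE_SIZE=$((150 * 1024 * 1024))   # example: a 150 MB file

# Files over the limit must use the multi-part upload process.
if [ "$FILE_SIZE" -gt "$LIMIT" ]; then
    MODE="multipart"
else
    MODE="single-part"
fi
echo "$MODE"
```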
1. Sign in to the Platform's Management Interface
2. Obtain your Platform's API key by selecting your name and then Access Methods:
and then,
Use the following method:
Multipart chunks must be larger than 5MB (5242880 bytes). If not, you will get the following error during your upload process: EntityTooSmall (client): Your proposed upload is smaller than the minimum allowed size.
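A minimal sketch of the part-size check implied above; 50MB here is just a commonly used value, not a requirement:

```shell
MIN_PART=5242880            # 5 MB minimum multipart chunk size, in bytes
CHUNK=$((50 * 1024 * 1024)) # the chunk size you plan to use (50 MB here)

# Chunks below the minimum trigger the EntityTooSmall error on upload.
if [ "$CHUNK" -ge "$MIN_PART" ]; then
    echo "chunk size ok"
else
    echo "EntityTooSmall risk"
fi
```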
When using multipart uploads, you first need to upload your first part and retrieve the token for subsequent upload processes.
For the first chunk, use:
That will deliver you back the uploadId parameter:
that you will use for subsequent parts:
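The first-part/subsequent-part flow can be sketched as follows. The endpoint path and every parameter other than uploadId are illustrative placeholders only; consult your platform's API reference for the real routes. The commands are built as strings rather than executed, since the host is a placeholder:

```shell
API="https://your-instance.example.com/api"   # placeholder base URL
UPLOAD_ID="abc123"                            # returned by the first-part call

# 1. Upload the first part; the JSON response contains the uploadId parameter.
FIRST="curl -X POST $API/upload?part=1 --data-binary @part_aa"

# 2. Reuse the uploadId token for each subsequent part.
NEXT="curl -X POST $API/upload?part=2&uploadId=$UPLOAD_ID --data-binary @part_ab"

echo "$NEXT"
```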