# Upload content

The platform supports several methods for uploading content:

## Using the Management Interface

1. Locate the data container you want to upload content to using the **Containers menu** section or by searching.
2. Select **Check-in** if check-in/out is enabled for the data container and you are not already checked in.
3. On the data container page, choose **Explore content**:

![](https://1695872106-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MgBifM53hLTD0q9p4fr%2F-MggXAsg3-J-lQH7NyS9%2F-MggZ4LR19q5yl--vx2Z%2Fimage.png?alt=media\&token=e314d2ce-c1db-4a24-9f4c-a547fc23fe0c)

4. Drag and drop the files you want to upload into the files and folders area:

![](https://1695872106-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MgBifM53hLTD0q9p4fr%2F-MggXAsg3-J-lQH7NyS9%2F-MggZ9qXTC3gXb9vDS1o%2Fimage.png?alt=media\&token=9381304d-63f1-40b8-a962-b1b0c8cae6a4)

{% hint style="warning" %}
Note that certain limitations exist when uploading content using the browser:

* Unless you use one of the methods described in the Data Integrity section, no strong integrity is provided.
* For high-volume uploads/downloads (by file count, size, or both), the browser may be slow or unable to upload your content.
* Empty folders cannot be uploaded; the interface only uploads folders that contain files.
{% endhint %}

## Using an S3-compatible tool

{% hint style="info" %}
You can use any S3-compatible CLI or GUI tool available in your environment. Make sure you check the [Using S3 Browser](https://docs.libnova.com/libsafe-advanced-manual/cookbook/using-s3-browser) guide, as other tools are configured in a similar way.

When using a CLI tool, we recommend following the [AWS CLI with LIBSAFE Advanced](https://docs.libnova.com/libsafe-advanced-manual/cookbook/aws-cli-with-libsafe-go) guide.
{% endhint %}

When uploading with this method, make sure the client performs integrity verification on upload, or use one of the other methods described in the [Data Integrity](https://docs.libnova.com/libsafe-advanced-manual/get-started/upload-content) section.
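One common form of client-side integrity verification is the `Content-MD5` header: the client computes a base64-encoded MD5 digest of the file locally, and the server rejects the upload if the received bytes do not match it. A minimal sketch, assuming `openssl` is available (the file name is hypothetical):

```shell
# Create a sample file and compute the base64-encoded MD5 digest that
# S3 clients send as the Content-MD5 header for upload verification.
printf 'hello' > sample.txt
openssl dgst -md5 -binary sample.txt | base64
# → XUFAKrxLKna5cZ2REBfFkg==
```

Most S3 tools compute and send this header automatically when integrity checking is enabled; check your client's documentation.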

1. Sign in to the platform's Management Interface.
2. Click your name and select **Access Methods**.
3. In the **S3 compatible protocol** section, click **Regenerate**.
4. Copy your **Access Keys** and **Secret Keys** and store them in a safe location. Note that more than one set of credentials can exist.

![](https://1695872106-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MgBifM53hLTD0q9p4fr%2F-MggZLSK48E3__IvW7Pz%2F-MggZZslkVNNej5sjRrV%2Fimage.png?alt=media\&token=3a9ec313-014e-4ca7-812a-4395f88e89ca)

{% hint style="info" %}
Please note that the Secret Key is displayed only once. You can regenerate a key, but the old key is invalidated and any process that uses it will receive an "access denied" error.
{% endhint %}

5. Configure the AWS CLI (or another S3-compatible tool):

```bash
$ aws configure
AWS Access Key ID [None]: <your access key>
AWS Secret Access Key [None]: <your secret key>
Default region name [None]: (just press ENTER here for None)
Default output format [None]: (just press ENTER here for None)
```
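For reference, `aws configure` writes these values to plain-text INI files under `~/.aws/`, so protect that directory as you would any other credential store. The resulting `~/.aws/credentials` file looks like this (placeholder values):

```
[default]
aws_access_key_id = <your access key>
aws_secret_access_key = <your secret key>
```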

{% hint style="info" %}
Use:

* **Access Key:** The one you obtained in the previous step.
* **Secret Key:** The one you obtained in the previous step.
* **Region:** Leave it blank for the default.
* **Output formats:** Leave it blank for the default.

Your S3 client may also ask for:

* **S3 Endpoint and DNS-style bucket:** Leave them blank for the defaults.
* **Chunk size:** Set it to a value between 3 MB and 3.9 GB; 50 MB is the recommended chunk size.
{% endhint %}
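If you use the AWS CLI, the multipart chunk size is a client-side setting that can be applied from the command line (this writes to `~/.aws/config`). The 50MB value follows the recommendation above:

```shell
# Set the multipart chunk size used by `aws s3 cp`/`aws s3 sync`
# to the recommended 50 MB.
aws configure set default.s3.multipart_chunksize 50MB
```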

6. Depending on the region and other settings, the platform keeps your data container inside a particular S3 bucket; all data containers in your instance may or may not share the same bucket. To obtain the **Bucket Name** associated with the data container you want to upload to, see [Getting your S3 bucket name](https://docs.libnova.com/libsafe-advanced-manual/get-started/upload-content).

A path to your files is created **using the following convention:**

```bash
s3://{S3 bucket name}/{container id}/{path to your file}
```
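As a concrete sketch, substituting hypothetical values (bucket name, container id, and object path) into the convention:

```shell
# Hypothetical values: bucket name, numeric container id, object path.
BUCKET=libsafes3bucket
CONTAINER_ID=5
FILE_PATH=photos/myfile.jpg
echo "s3://${BUCKET}/${CONTAINER_ID}/${FILE_PATH}"
# → s3://libsafes3bucket/5/photos/myfile.jpg
```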

So you can use:

```bash
$ aws s3 cp myfile.jpg s3://libsafes3bucket/5/myfile.jpg
   upload: ./myfile.jpg to s3://libsafes3bucket/5/myfile.jpg
```

{% hint style="info" %}

* **S3 bucket:** The S3 bucket that holds your data container (`libsafes3bucket` in the example).
* **Data container identifier:** The data container to upload to (`5` in the example).
* **File path:** The path and file name to upload to (`myfile.jpg` in the example).
{% endhint %}
