# Download content

**LIBSAFE Go** allows users to download content from the platform using several methods:

## Using the Management Interface

1. Locate the data container from which you want to download using the **Containers menu** section or by searching.
2. If check-in/check-out is enabled for the data container and you are not checked in, select **Check-in**.
3. In the data container page, choose **Explore content**:

![](/files/-Mgu7YFnnY4KWxJ2d_Tw)

4. Select the file you want to download, right-click on it and select **Download**. You can select multiple files or folders; the platform will create a ZIP file (named after the first selected file) containing them and start the download.

![](/files/-MgjwWsTdcCswekA48bJ)

{% hint style="warning" %}
Note that certain limitations exist when downloading content using the browser:

* Unless you use one of the methods described in the Data Integrity section, no strong integrity guarantee is provided.
* For high-volume downloads (by file count, size or both), the browser may be slow or unable to retrieve your content. An S3 client is recommended.
  {% endhint %}
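For a basic manual check after a browser download, you can compute a checksum locally and compare it against a value recorded elsewhere. This is a generic sketch (the file name is a placeholder), not a substitute for the platform's Data Integrity features:

```shell
# Create a stand-in file so the example is self-contained;
# in practice you would point sha256sum at your downloaded ZIP.
printf 'example content\n' > downloaded.zip

# Print the SHA-256 digest; compare it with a checksum recorded
# before or at ingest time.
sha256sum downloaded.zip
```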

## Using the API

{% hint style="info" %}
API examples here are just illustrative. Check the API documentation for additional information and all available methods.
{% endhint %}

{% hint style="info" %}
The S3 protocol is the **recommended** way to upload or download content from the platform: it is the fastest, most parallelizable and easiest option. Use the API only for small workloads and low concurrency.
{% endhint %}

1. Sign in to the platform's Management Interface.
2. Obtain your API key by clicking your name and then selecting **Access Methods**:

![](/files/-Mgu7YFq88Flx838lR34)

and then use the following method:

```bash
curl --request GET \
     --url "$your_platform_url/api/file/{your file ID}/download" \
     --header "authorization: Bearer $your_platform_api_key" \
     -L --output your_downloaded_file.txt
```

{% hint style="info" %}
When you make this request, the platform will 1) verify that you have read permission for the file, 2) create a pre-signed download URL valid for 20 minutes, and 3) send back a 301 redirect to the pre-signed URL. **Make sure your script or tool follows redirects (`-L` in curl).**
{% endhint %}
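The request above can be wrapped in a small script. This is an illustrative sketch: the platform URL, API key and file ID are placeholders, `download_url` is a hypothetical helper, and the actual transfer is gated behind `RUN_DOWNLOAD=1` so the script can be adapted safely:

```shell
# Placeholder values -- replace with your instance URL, API key and file ID.
PLATFORM_URL="https://your-instance.example.com"
API_KEY="your_platform_api_key"
FILE_ID=42

# Build the download endpoint for a given file ID.
download_url() {
  printf '%s/api/file/%s/download' "$PLATFORM_URL" "$1"
}

# -L follows the 301 redirect to the pre-signed URL;
# --fail makes curl exit non-zero on HTTP errors.
if [ "${RUN_DOWNLOAD:-0}" = "1" ]; then
  curl --fail -L \
       --header "authorization: Bearer $API_KEY" \
       --output "file_${FILE_ID}.bin" \
       "$(download_url "$FILE_ID")"
fi
```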

## Using the S3 protocol

{% hint style="info" %}
You can use any of the many S3-compatible CLI or GUI tools available in your environment. Make sure you check the [Using S3 Browser](/libsafe-go/cookbook/using-s3-browser.md) guide, as every other tool is configured in a similar way.

When using a CLI tool, we recommend the [AWS CLI with LIBSAFE Go](/libsafe-go/cookbook/aws-cli-with-libsafe-go.md).
{% endhint %}

1. Sign in to the platform's Management Interface.
2. Click on your name and select **Access Methods**.
3. In the **S3 compatible protocol** section, click **Regenerate**.
4. Copy your **Access Key** and **Secret Key** and store them in a safe location.

![](/files/-MgjwyS5pQHGuMIy4Fuq)

{% hint style="info" %}
Please note that the Secret Key will only be displayed once. It is possible to regenerate a key, but the old key will be invalidated and any process that uses it will receive an "access denied" error.
{% endhint %}

5. Configure the AWS S3 CLI tool (or another S3 tool):

```
$ aws configure
AWS Access Key ID [None]: <your access key>
AWS Secret Access Key [None]: <your secret key>
Default region name [None]: (just press ENTER here for None)
Default output format [None]: (just press ENTER here for None)
```

{% hint style="info" %}
Use:

* **Access Key:** The one you obtained in the previous step.
* **Secret Key:** The one you obtained in the previous step.
* **Region:** Leave it blank for the default.
* **Output formats:** Leave it blank for the default.

Your S3 client may also ask for:

* **S3 Endpoint and DNS-style bucket:** Leave it blank for the default.
* **Chunk size:** Set it to a value between 3 MB and a maximum of 3.9 GB; 50 MB is the recommended chunk size.
  {% endhint %}
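The same configuration can be applied non-interactively with `aws configure set`; the chunk size above maps to the AWS CLI's `multipart_chunksize` setting. The keys are standard AWS CLI configuration names, and the values are placeholders:

```shell
# Store credentials without the interactive prompt.
aws configure set aws_access_key_id "<your access key>"
aws configure set aws_secret_access_key "<your secret key>"

# Use 50 MB multipart chunks, the recommended value above.
aws configure set default.s3.multipart_chunksize 50MB
```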

6. Depending on the region and other settings, the platform keeps your data container inside a particular S3 bucket; data containers in your instance may or may not share the same bucket. To obtain the **Bucket Name** associated with the data container you want to work with, see [Getting your S3 bucket name](/libsafe-go/cookbook/getting-your-s3-bucket-name.md).

The path to your files is built **using the following convention:**

```bash
s3://{S3 bucket name}/{container id}/{path to your file}
```
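A small helper makes that convention explicit. This is an illustrative function, not part of the platform; the bucket and container values are the example ones used below:

```shell
# Build the S3 URI for a file inside a data container.
container_uri() {
  bucket="$1"; container_id="$2"; file_path="$3"
  printf 's3://%s/%s/%s' "$bucket" "$container_id" "$file_path"
}

container_uri libsafes3bucket 5 myfile.jpg
# -> s3://libsafes3bucket/5/myfile.jpg
```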

So you can use:

```
$ aws s3 cp s3://libsafes3bucket/5/myfile.jpg myfile.jpg
download: s3://libsafes3bucket/5/myfile.jpg to ./myfile.jpg
```

{% hint style="info" %}

* **S3 bucket:** The S3 bucket in which your data container is located (`libsafes3bucket` in the example).
* **Data container identifier:** The data container to download from (`5` in the example).
* **File path:** The path and file name to download (`myfile.jpg` in the example).
  {% endhint %}
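For bulk downloads, `aws s3 sync` mirrors everything under a container prefix in one command. The bucket and container id below are the example values from above; the sync itself is gated behind `RUN_SYNC=1` since it requires a configured AWS CLI:

```shell
# Example values from above -- replace with your bucket and container id.
BUCKET=libsafes3bucket
CONTAINER_ID=5
DEST="./container-${CONTAINER_ID}"

mkdir -p "$DEST"

# Mirror the whole container prefix into the local directory.
if [ "${RUN_SYNC:-0}" = "1" ]; then
  aws s3 sync "s3://${BUCKET}/${CONTAINER_ID}/" "$DEST"
fi
```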

