Tips for faster uploads
LABDRIVE scales well when handling file uploads and downloads. Downloads scale without limitation and require no particular approach, but there are some recommended techniques for uploading content faster when you plan to upload more than 10 TB.
Three main elements contribute to faster uploads. Combine all of them to obtain the best performance:
Upload tool in use
Parallelization of uploads
Upload prefixes and containers
You can use any S3-compatible tool to upload content to your Data Containers, but for the best results we recommend the most recent version of the AWS CLI. Unlike other tools, it has been optimized for parallelization.
Some Linux distributions install older versions of the client by default. Make sure you are always using the latest version.
You can use the guide for examples of how to use this tool.
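As a minimal sketch, a recursive upload with the AWS CLI looks like the following. The container name, endpoint URL, and local path are placeholders, not values from this guide; take the real ones from your LABDRIVE configuration.

```shell
# Hypothetical values: replace the endpoint URL, container name, and local
# path with the ones from your LABDRIVE configuration.
aws s3 cp ./dataset s3://my-container/dataset \
    --recursive \
    --endpoint-url https://s3.example.labdrive.net
```

For large trees, `aws s3 sync` with the same placeholder arguments is also an option, as it can resume an interrupted transfer by skipping objects that already exist.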
If you plan to transfer a large number of relatively small files, you can benefit greatly from parallelizing multiple upload processes.
The AWS CLI tool already has some built-in parallelization, but to achieve better results you can launch multiple processes in parallel.
Amazon S3 supports a request rate of 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket. The resources for this request rate aren't automatically assigned when a prefix is created. Instead, as the request rate for a prefix increases gradually, Amazon S3 automatically scales to handle it.
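Because the limit applies per prefix, spreading objects over many prefixes raises the aggregate request rate you can reach. A sketch of one way to do this (the container and file names are hypothetical) is to derive a short hash prefix from each object key, distributing requests over up to 256 prefixes:

```shell
# Derive a two-character hash prefix for a file and build the target key.
f="run42/data_0001.bin"                      # hypothetical file path
p=$(printf '%s' "$f" | md5sum | cut -c1-2)   # first two hex chars: 256 possible prefixes
key="s3://my-container/${p}/${f}"            # my-container is a placeholder
echo "$key"
```

Any deterministic function of the key works; the point is that uploads no longer concentrate on a single prefix.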
If there is a sudden spike in the request rate for objects in a prefix, Amazon S3 might return 503 Slow Down errors while it scales in the background to handle the increased request rate. To avoid these errors, you can:
Distribute objects and requests across multiple folders or containers, as the limit stated above is "per prefix", and/or
Configure your application to gradually increase the request rate and retry failed requests using exponential backoff.
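Assuming AWS CLI v2, its built-in retry behavior can be tuned so that transient 503 Slow Down responses are retried with backoff. These are standard AWS CLI configuration settings, not LABDRIVE-specific ones:

```shell
# Write retry settings into the CLI's configuration (~/.aws/config).
aws configure set retry_mode adaptive   # client-side rate limiting with backoff
aws configure set max_attempts 10       # attempt each failed request up to 10 times
```

With `retry_mode` set to `adaptive`, the CLI also throttles itself when it detects the service pushing back, which pairs well with a gradually increasing request rate.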