Optimize using the library with Nginx as a proxy to AWS S3 #73
Hello,
Many years ago I made some changes for the sole purpose of integrating this module into both personal and professional projects that run on Amazon Web Services and use S3 as a backend for media assets.
This module helped me build secure yet scalable and efficient asset storage by proxying S3 with Nginx.
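As a rough illustration of that setup, an internal Nginx location can proxy requests to the bucket so that clients never talk to S3 directly. This is only a sketch under assumptions: the bucket name, resolver, and hidden headers are placeholders to adapt to your own deployment.

```nginx
# Hypothetical internal location that proxies to an S3 bucket.
# Being "internal", it is only reachable via X-Accel-Redirect
# from the application, never directly by clients.
location /s3/ {
    internal;
    resolver 8.8.8.8;                       # any DNS resolver reachable by Nginx
    set $s3_host my-bucket.s3.amazonaws.com; # placeholder bucket host
    proxy_set_header Host $s3_host;
    proxy_hide_header x-amz-id-2;            # strip S3-specific headers
    proxy_hide_header x-amz-request-id;
    proxy_pass https://$s3_host;
}
```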
The applications offload data transfers to Nginx thanks to the X-Sendfile header.
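For Nginx specifically, the X-Sendfile mechanism is spelled `X-Accel-Redirect`: the application answers with an empty body plus that header, and Nginx streams the object itself. A minimal framework-agnostic sketch (the `/s3/` prefix is the hypothetical internal location above, not something from this module):

```python
def x_accel_headers(key, internal_prefix="/s3/"):
    """Build response headers that delegate serving an S3 object to Nginx.

    The application returns an empty body; Nginx intercepts the
    X-Accel-Redirect header and proxies the object from S3 instead,
    so the app server never touches the file contents.
    """
    return {
        # Nginx rewrites the request to this internal location.
        "X-Accel-Redirect": internal_prefix + key,
        # Content-Type and Content-Length are left for Nginx/S3 to set.
    }
```

In Django or Flask this dict would simply be merged into the response headers of the view that authorizes the download.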
In such a setup, trivial operations such as computing the MIME type or the file size require unnecessary round trips (and data transfers) from S3 to the servers. I investigated the issue and found a neat way to optimize such a setup.
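One way to avoid those round trips, sketched below, is to derive the MIME type locally from the key name and to keep sizes in a local index (for example, recorded at upload time or stored in the application database). The `size_index` mapping is an assumption for illustration, not part of this module's API:

```python
import mimetypes
import posixpath

def local_metadata(key, size_index):
    """Return object metadata without contacting S3.

    The MIME type is guessed from the key's file name, and the size
    comes from ``size_index``, a hypothetical key -> bytes mapping
    populated when the object was uploaded.
    """
    mime, _encoding = mimetypes.guess_type(posixpath.basename(key))
    return {
        "content_type": mime or "application/octet-stream",
        "size": size_index.get(key),  # None if the key was never indexed
    }
```

With this in place, neither operation triggers a request (or data transfer) to S3.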
Basically I had two means to optimize the storage:
There are currently two projects running in production (Django and Flask) with this module (my version of it).
I would be glad to contribute to this repository and give other users the opportunity to use these features.
Thank you for reviewing the changes.
Best Regards,
David Fischer