(WIP) add --skip-existing/WHITENOISE_SKIP_EXISTING option to run faster #295

Closed

Conversation

@PetrDlouhy (Author)

I have tried to solve the slow compression of large numbers of files (#279) with a slightly different approach.

I use Docker to build my project. I can make use of Docker layer caching by first running collectstatic without the project files (with a slightly modified settings file). Then I copy all project files and run collectstatic once again.
If collectstatic doesn't compress already-existing files in the second run, that run is quick, and the first run stays cached as long as I don't change the settings file.
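
For illustration, a minimal sketch of how the two runs could be toggled in settings (the WHITENOISE_SKIP_EXISTING name matches the patch below; reading it from an environment variable is only my assumption about the "slightly modified settings file"):

```python
# settings.py -- illustrative sketch. WHITENOISE_SKIP_EXISTING is the
# setting read by the patch below; the SKIP_EXISTING env var is made up.
import os

# First Docker stage: run collectstatic with the variable unset, so
# everything gets compressed and the layer is cached. Second stage:
# set SKIP_EXISTING=1 so files compressed in the cached layer are kept.
WHITENOISE_SKIP_EXISTING = os.environ.get("SKIP_EXISTING") == "1"
```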

I think a similar approach could also be used in other cases, except there would need to be some mechanism to determine which files have changed.
It could be enough to store the hashes of the original files in a separate file.
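
A rough sketch of what that mechanism could look like (entirely hypothetical: the manifest name, layout, and helpers are not part of WhiteNoise):

```python
# Sketch of a hash manifest for deciding which files to (re)compress.
import hashlib
import json
import os

MANIFEST = "compressed-manifest.json"  # hypothetical file name

def file_hash(path):
    # Hash the original file's contents in chunks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def load_manifest(root):
    # Previous run's hashes; empty on the first run.
    try:
        with open(os.path.join(root, MANIFEST)) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def needs_compression(path, manifest):
    # Compress only files that are new or whose contents changed.
    return manifest.get(path) != file_hash(path)
```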

I would be glad for any thoughts on this approach. If it turns out to be the right one, I can turn this into a proper PR with documentation and the rest.

```diff
@@ -65,6 +67,10 @@ def log(self, message):
         pass

     def compress(self, path):
+        skip_existing = getattr(settings, "WHITENOISE_SKIP_EXISTING", False)
+        if (self.skip_existing or skip_existing) and os.path.isfile(path + ".br") and os.path.isfile(path + ".gz"):
```
@merwok
Shouldn’t the file checks be dependent on self.use_brotli / self.use_gzip?

@PetrDlouhy (Author)

@merwok You are absolutely right; that would not work if the user has one of the formats disabled. Thank you for your feedback.
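
Something along these lines would respect both flags (an untested sketch against this patch; already_compressed is a hypothetical helper):

```python
import os

def already_compressed(self, path):
    # Skip only when every *enabled* format already has its
    # compressed counterpart on disk.
    wanted = []
    if self.use_brotli:
        wanted.append(path + ".br")
    if self.use_gzip:
        wanted.append(path + ".gz")
    return bool(wanted) and all(os.path.isfile(p) for p in wanted)
```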

Although the more I think about it, the more I believe it would be better to store the file hashes and compress only when a file's hash changes. That way it could be the default behavior and all users would benefit from it.

@PetrDlouhy (Author)

Closing this in favor of #296

@PetrDlouhy closed this Dec 16, 2021