File Permalinks throw 404 if UUID cache is empty #5181
Comments
So far this is intentional, though we can debate whether it was the right call: Looking up UUIDs without the cache means traversing the whole site index, and especially for large sites, allowing this to be triggered via a permalink request seemed like an easy vector for DDoS attacks. That's why we made the permalink lookup lazy, reading only from the cache.
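For context, here is a rough sketch of the two lookup strategies being weighed in that comment. It is purely illustrative: the function names and content field access are assumptions, not Kirby's actual internals.

```php
<?php

use Kirby\Cms\App;

// Lazy lookup: read only from the UUID cache. Cheap, but returns nothing
// (and the permalink 404s) when the cache entry is missing.
function lookupFromCache(string $uuid): ?string
{
    return App::instance()->cache('uuid')->get($uuid);
}

// Eager lookup: traverse the whole site index until the UUID is found.
// Works with an empty cache, but is expensive on large sites, which is
// why exposing it on a public permalink route was seen as a DoS risk.
// (Simplified sketch: only looks at pages, not files.)
function lookupFromIndex(string $uuid): ?string
{
    foreach (App::instance()->site()->index(true) as $page) {
        if ($page->content()->get('uuid')->value() === $uuid) {
            return $page->id();
        }
    }

    return null;
}
```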
I understand the decision behind that. Given that, it should probably still be resolved in some way, e.g. when the entire cache is empty. Right now the behavior doesn't work for any site where the cache folder is non-persistent, which will probably be the case for most "modern" deploy setups. If it's actually a problem, make it a config option so large sites can turn it off if needed.
As a quick fix, I would suggest that you repopulate the UUID cache after deployment.
Just my 2c: for AutoID I was only running the indexing if the current index count was 0.
What Bruno wrote could be an option. Then the DoS/DDoS potential is very low. But we might need a mechanism to ensure that only one PHP process can start the time-intensive indexing process at a time.
@lukasbestle is there a good way to check how many entries one of our caches has?
Not a way that works with all cache drivers. But we could do something like this:
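The code that followed was not preserved in this thread. A minimal sketch of the idea, assuming a marker key is used instead of counting entries (the key name `populated` is hypothetical):

```php
<?php

use Kirby\Cms\App;
use Kirby\Uuid\Uuids;

// Sketch: instead of counting cache entries (not supported by every cache
// driver), store a marker key in the UUID cache once the index has been built.
$cache = App::instance()->cache('uuid');

if ($cache->get('populated') !== true) {
    // Expensive on large sites; ideally only one PHP process should be
    // allowed to run this at a time (e.g. guarded by a lock).
    Uuids::populate();
    $cache->set('populated', true);
}
```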
@lukasbestle To be honest, this sounds like quite the hack/workaround though, trying to clean up after developer behavior. It doesn't feel too good to me. I see it two ways:
I get your points. It's not a fully clean solution. However, I think we cannot fully put the blame on developers. A cache should always be ephemeral (= clearing the cache shouldn't be destructive). This is already mostly the case, as the UUID cache can be rebuilt from the content files, but with the permalink feature Kirby suddenly depends on the cache and the feature doesn't work without it. A cache can be missing not only because the developer cleared it, but also simply because the site was deployed. Some hosting setups (especially cloud-based ones) don't have permanent file storage. In these cases, it's not even possible to keep the cache across deployments; the site is always deployed fresh. So the dev would need to actively build the cache on deployment. That is of course possible, but it's an extra step for deploying Kirby and at least something we need to document more clearly.
I wouldn't do that, TBH. If devs enabled that option, they would automatically introduce an application-layer DDoS vulnerability into their site with no way to protect against it. So if we cannot solve this in the core by rebuilding the index on an empty cache, I think the only solution is to document the behavior.
But then it sounds like our permalink implementation is basically wrong. If a cache cannot be relied upon, we can't have a feature that relies on the cache. To prevent attacks, can we instead throttle/limit the lookup from the index?
I don't think the implementation is wrong. It's just a clash of two different requirements that cannot both be fully satisfied. To throttle the lookup, we would again need to store a timestamp or something in the cache, at which point we could also go the route I mentioned above, which I think is more reliable.
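For illustration, a sketch of the timestamp-based throttle mentioned above. The key name and interval are assumptions, not existing Kirby behavior:

```php
<?php

use Kirby\Cms\App;
use Kirby\Uuid\Uuids;

// Sketch: allow a full index rebuild at most once per interval,
// tracked via a timestamp stored in the UUID cache.
$cache    = App::instance()->cache('uuid');
$interval = 300; // seconds between allowed rebuilds
$last     = $cache->get('last-rebuild') ?? 0;

if (time() - $last > $interval) {
    $cache->set('last-rebuild', time());
    Uuids::populate();
}
```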
I am experiencing this problem. Is there a way to repopulate the UUID cache?
The official CLI can do it: https://github.com/getkirby/cli
It is the same as calling this line of PHP code:
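The exact line isn't preserved in this thread; presumably it is the core call that rebuilds the UUID cache by indexing the whole site once:

```php
\Kirby\Uuid\Uuids::populate();
```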
Thanks @bnomei, very much appreciated!
Hi there, I've been struggling with this pretty hard. A client has a website with many file downloads, and we used the Writer field's link feature to link the files, which works via permalinks. Because it is a very large website with many pages, the client insists on caching the website. But the links don't work at all once the cache is turned on, because the 404 page gets cached for those links, I suppose.
What can I do about it? Is there any suggested solution? Going through 500 website pages and changing the links to absolute links would take a lot of time and would of course cost money. Unfortunately, I didn't find any warnings about permalinks and UUIDs, so I thought this was a well-tested feature. Thank you in advance. Kirby version 4.2
UUID/permalink cache should not be cleared when the content changes |
Hi @tobimori, thank you, that's good to know. But why don't file permalinks work? OK, I tried again after resaving the page, and they started working. Hmm, strange...
IIRC saving a page should trigger population of the UUID cache, and then all file permalinks should work. The issue should only happen when manually deleting the folder/cache.
Sorry, ignore my comments. All is good with permalinks on my end. Without telling me, the client had created a script that deletes the cache once a day, but he had set it up to delete the full cache. We have fixed that now.
Description
A client of mine uses the newly added UUID permalinks for sending file links to customers. When a new deployment of the site happens, we flush the complete cache, including the UUID cache (essentially `rm -rf` the folder). The absence of the UUID cache folder then makes all permalinks fail with a 404 until the cache entries are created again. As the media folder URLs are prone to change, we need to use the permalinks.
Expected behavior
Permalinks need to also work when the UUID cache hasn't been generated yet, which means that accessing a permalink route should also generate the UUID cache.
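Until this is addressed in core, a possible site-level workaround is sketched below. It is only a sketch: the `route:before` hook and `Uuids::populate()` exist in Kirby, but the `@/` permalink path prefix check and the `populated` marker key are assumptions for illustration.

```php
<?php
// site/config/config.php

use Kirby\Uuid\Uuids;

return [
    'hooks' => [
        // If a permalink URL is requested while the UUID cache has never
        // been populated, rebuild it before routing continues.
        'route:before' => function ($route, string $path, string $method) {
            if (str_starts_with($path, '@/') === true) {
                $cache = $this->cache('uuid');

                if ($cache->get('populated') !== true) {
                    Uuids::populate();
                    $cache->set('populated', true);
                }
            }
        }
    ]
];
```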
To reproduce
I have not tested this with Pages, but I suspect the behavior to be the same.
Your setup
Kirby Version 3.9.4