`get_paginated_jobs`, `get_paginated_uploads`: merge, cache job outcome stat processing #4316

+366 −56
The logic is effectively repeated between the two functions, so this PR takes advantage of that to make a common implementation that conditionally redis-caches the result. The thinking is that once all of a job's notifications are in a "completed" status they are unlikely to change again, so we can save ourselves from re-scanning the notifications table, which can hold up to 100k rows per job. Being able to share this cache between `get_paginated_jobs` and `get_paginated_uploads` is a bonus (see the sketch below).

Depends on alphagov/notifications-utils#1179
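A minimal sketch of the conditional-caching idea, written with plain redis get/set for clarity. All names here (`get_notification_outcomes_for_job`, `query_outcomes`, `COMPLETED_STATUSES`, the cache key and TTL) are illustrative assumptions rather than the PR's actual identifiers; the real change wraps this logic in the `RequestCache` decorator from notifications-utils, which is why it depends on the utils PR above.

```python
import json

# Illustrative set of statuses after which a notification's outcome can no
# longer change: the real list comes from notifications-utils' constants.
COMPLETED_STATUSES = {"delivered", "sent", "permanent-failure", "temporary-failure", "technical-failure"}

CACHE_TTL_SECONDS = 7 * 24 * 60 * 60  # assumed TTL, not taken from the PR


def get_notification_outcomes_for_job(redis_client, query_outcomes, service_id, job_id):
    """Return [[status, count], ...] for a job, reading through a redis cache.

    The result is only written to redis once every notification has reached a
    completed status, because only then can the counts no longer change.
    """
    cache_key = f"job-{job_id}-notification-outcomes"

    cached = redis_client.get(cache_key)
    if cached is not None:
        return json.loads(cached)

    # Potentially expensive: a GROUP BY over up to ~100k notification rows.
    outcomes = query_outcomes(service_id, job_id)

    if outcomes and all(status in COMPLETED_STATUSES for status, _ in outcomes):
        redis_client.set(cache_key, json.dumps(outcomes), ex=CACHE_TTL_SECONDS)

    return outcomes
```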
I'm aware that this does some slightly out-of-character things for the api code.

Firstly, it uses `RequestCache` a little beyond its nominative purpose, but it's the redis-caching decorator we already have in our codebase, and creating another one along with all the required tests doesn't seem like a good use of effort. Renaming and/or significantly reworking the class would also require a breaking utils change that I'd rather avoid.

Secondly, this is the first time caching is set or read from within `dao/` (elsewhere in the dao, caches are only cleared). That's why I've given the function a slightly stupid name: it can't be overlooked that it might be fetching from the cache, whereas developers would normally expect a dao function to access the database directly. Other than that, there didn't seem to be a better place to put a function shared between both the jobs and uploads views 🤷
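A hypothetical sketch of how the shared, loudly-named dao helper might be called from both paginated views; the function and serialiser names are invented for illustration and are not the PR's actual identifiers.

```python
# Both views delegate their per-job statistics lookup to the same dao helper,
# so they read and populate the same cache entries.


def get_possibly_cached_notification_outcomes_for_job(service_id, job_id):
    # The awkward name is deliberate: it flags that this dao call may return
    # redis-cached data rather than querying the notifications table directly.
    return [("delivered", 95_000), ("permanent-failure", 5_000)]


def serialise_job(service_id, job):
    return {
        "id": job["id"],
        "statistics": get_possibly_cached_notification_outcomes_for_job(service_id, job["id"]),
    }


def serialise_upload(service_id, upload):
    # Uploads are backed by the same jobs, so they hit the same cache entries.
    return {
        "id": upload["id"],
        "statistics": get_possibly_cached_notification_outcomes_for_job(service_id, upload["id"]),
    }


if __name__ == "__main__":
    print(serialise_job("service-1", {"id": "job-1"}))
    print(serialise_upload("service-1", {"id": "job-1"}))
```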