Generate Raptor transfer cache in parallel #6326
base: dev-2.x
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## dev-2.x #6326 +/- ##
=============================================
+ Coverage 69.79% 69.81% +0.01%
- Complexity 17798 17827 +29
=============================================
Files 2019 2019
Lines 76126 76250 +124
Branches 7786 7803 +17
=============================================
+ Hits 53132 53233 +101
- Misses 20288 20302 +14
- Partials 2706 2715 +9
☔ View full report in Codecov by Sentry.
I think this is going to reduce throughput slightly. The issue is that a request which requires new transfers to be generated will steal processor time from other requests. I am not sure how this affects memory fetches, but it might have a negative effect on running trip searches, at least if a planning request is swapped out in favour of calculating transfers. The threads also lose log-trace-parameter propagation and graceful timeout handling. The parallel processing at least needs to be gated behind a feature flag.
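One way to limit how much processor time transfer generation can take from in-flight routing requests is to run it on its own bounded thread pool rather than the common pool. The sketch below is illustrative only, not the PR's implementation; `BoundedTransferPool`, `computeAll`, and `expensiveTransfer` are hypothetical names, and the half-the-cores sizing is an arbitrary example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BoundedTransferPool {
    // Cap transfer computation at half the cores so routing requests
    // keep some processor time (the ratio here is an arbitrary example).
    private static final int WORKERS =
        Math.max(1, Runtime.getRuntime().availableProcessors() / 2);

    private final ExecutorService pool = Executors.newFixedThreadPool(WORKERS);

    // Submit one transfer-computation task per stop and wait for all results.
    public List<Integer> computeAll(List<Integer> stops) throws Exception {
        List<Future<Integer>> futures = stops.stream()
            .map(stop -> pool.submit(() -> expensiveTransfer(stop)))
            .toList();
        List<Integer> results = new ArrayList<>();
        for (Future<Integer> f : futures) {
            results.add(f.get());
        }
        pool.shutdown();
        return results;
    }

    // Stand-in for the real, expensive transfer computation.
    private static int expensiveTransfer(int stop) {
        return stop * 2;
    }
}
```

A dedicated pool also gives a single place to attach trace-context propagation and timeout handling, which the comment above notes are lost when work simply fans out onto the common pool.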
There is also a possibility that these are only computed in parallel before start-up but not after the server is running. I don't know whether this code is used for both cases or not.
I specifically need it to compute in parallel in order to bring our response time down from 4 minutes to 1 minute.
Only check the feature flag during run time, not during start-up. |
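Checking the flag at runtime rather than start-up can be sketched as below. This is a minimal illustration with hypothetical names (`RuntimeFlagSketch`, `PARALLEL_TRANSFER_CACHE`, `cacheWork`); in OTP the toggle would be an `OTPFeature`-style flag rather than a bare boolean.

```java
import java.util.stream.IntStream;

public class RuntimeFlagSketch {
    // Hypothetical feature flag; volatile so a runtime toggle is visible
    // to worker threads without synchronization.
    static volatile boolean PARALLEL_TRANSFER_CACHE = true;

    // The flag is read on every call, so flipping it at runtime takes
    // effect immediately instead of being frozen at start-up.
    static long cacheWork(int n) {
        IntStream range = IntStream.rangeClosed(1, n);
        if (PARALLEL_TRANSFER_CACHE) {
            range = range.parallel();
        }
        return range.mapToLong(i -> (long) i * i).sum();
    }
}
```

Reading the flag at the call site (instead of capturing it in a field during construction) is what makes the "runtime, not start-up" distinction hold.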
Summary
This makes the Raptor cache generating process run in parallel.
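The shape of the change can be sketched as follows: independent per-configuration computations are fanned out with `parallelStream()` and collected into a thread-safe map. This is a simplified stand-in, not the PR's code; `Config`, `computeTransfers`, and `precache` are hypothetical names.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ParallelCacheSketch {
    // Hypothetical stand-in for one Raptor request configuration.
    record Config(String name) {}

    // Stand-in for the expensive per-configuration transfer computation.
    static int computeTransfers(Config c) {
        return c.name().length();
    }

    // Each configuration is computed on the common fork-join pool; results
    // land in a thread-safe map, so no ordering between tasks is needed.
    static Map<Config, Integer> precache(List<Config> configs) {
        Map<Config, Integer> cache = new ConcurrentHashMap<>();
        configs.parallelStream().forEach(c -> cache.put(c, computeTransfers(c)));
        return cache;
    }
}
```

Because the configurations are independent, the speed-up is roughly bounded by the number of configurations and available cores, which matches the 16-minute to 5-minute improvement reported below.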
Our GB-wide deployment pre-caches 4 configurations on startup. Before this patch, it took 16 minutes to cache the 4 configurations for the whole of GB on a 16-core machine.
After applying this patch, it takes only 5 minutes.
Also, the journey planning response time for a new configuration has been reduced correspondingly from more than 4 minutes to around 1.5 minutes.
Issue
#6312
Unit tests
None. This is a performance improvement only with no externally visible change.
Documentation
N/A
Changelog
Bumping the serialization version id
Not needed