Cache storage policy application results (`Search`, `Replicate`) #2892
Signed-off-by: Leonard Lyubich <[email protected]>
Codecov Report. Attention: Patch coverage is

```
@@            Coverage Diff             @@
##           master    #2892      +/-   ##
==========================================
+ Coverage   23.67%   23.73%   +0.05%
==========================================
  Files         775      775
  Lines       44908    44933      +25
==========================================
+ Hits        10633    10664      +31
+ Misses      33420    33417       -3
+ Partials      855      852       -3
```
Force-pushed from `be46f4c` to `13a115d` (compare)
Also, the last commit message says this was attempted before without success. Could you please add more detail about that attempt to the commit message?
Force-pushed from `265dc5e` to `b9583a4` (compare)
Done.
Force-pushed from `b9583a4` to `701dba8` (compare)
The result of applying a container's (C) storage policy to the network map (N) does not change for fixed C and N. Previously, the `Search` and `Replicate` object server handlers always calculated the list of container nodes from scratch. This caused excessive node resource consumption under a dense flow of requests touching a small number of containers per epoch. The obvious solution is to cache the latest results.

A similar attempt was already made in 9269ed3, but it turned out to be incorrect and changed nothing: as can be seen from the code, the cache was consulted only when the pointer of the received network map matched that of the last processed one, and the latter was never set, so the cache was never hit.

This change adds a caching component for up to 1000 recently requested lists of container nodes. At the cost of some retained memory, the component mitigates load spikes concentrated on a small number of containers. The limit of 1000 was chosen heuristically as a first approximation. Tests in the development environment showed a solid improvement, but results from real load tests are yet to be obtained. Based on those, a similar optimization for other layers and queries will be done later.

Refs #2692.

Signed-off-by: Leonard Lyubich <[email protected]>
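The component described above is essentially a bounded LRU cache keyed by the (container, network map) pair. The following is a minimal self-contained sketch of that idea; the type and field names (`cacheKey`, `containerNodesCache`, representing the network map by its epoch and nodes by strings) are illustrative assumptions, not the identifiers used in the PR.

```go
package main

import (
	"container/list"
	"fmt"
)

// cacheKey identifies one policy application result: for a fixed
// container and network map (represented here by epoch) the resulting
// node list never changes, so the pair is a valid cache key.
type cacheKey struct {
	cnr   string // container ID
	epoch uint64 // network map epoch
}

type entry struct {
	key   cacheKey
	nodes []string
}

// containerNodesCache keeps up to `capacity` recently requested
// container node lists, evicting the least recently used entry
// when full (the PR uses a limit of 1000).
type containerNodesCache struct {
	capacity int
	order    *list.List // front = most recently used
	items    map[cacheKey]*list.Element
}

func newContainerNodesCache(capacity int) *containerNodesCache {
	return &containerNodesCache{
		capacity: capacity,
		order:    list.New(),
		items:    make(map[cacheKey]*list.Element),
	}
}

// get returns the cached node list and marks the entry as recently used.
func (c *containerNodesCache) get(k cacheKey) ([]string, bool) {
	el, ok := c.items[k]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return el.Value.(*entry).nodes, true
}

// put stores a freshly computed node list, evicting the LRU entry
// if the cache is at capacity.
func (c *containerNodesCache) put(k cacheKey, nodes []string) {
	if el, ok := c.items[k]; ok {
		el.Value.(*entry).nodes = nodes
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	c.items[k] = c.order.PushFront(&entry{key: k, nodes: nodes})
}

func main() {
	c := newContainerNodesCache(2)
	c.put(cacheKey{"cnr1", 10}, []string{"node-a", "node-b"})
	if nodes, ok := c.get(cacheKey{"cnr1", 10}); ok {
		fmt.Println(nodes) // [node-a node-b]
	}
}
```

Note that keying by epoch (rather than by network map pointer, as the earlier broken attempt in 9269ed3 did) keeps the cache correct across handler calls without requiring the "last processed map" to be tracked at all. A production version would also need a mutex around `get`/`put` for concurrent handlers.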
Force-pushed from `701dba8` to `10d05a4` (compare)
Code and unit tests are ready, integration tests pass. I also want to take a memory profile in devenv.
I created 5 containers and ran 30 goroutines sending 1000 search queries (each time to a random container).
- v0.42.1
- current branch
- Go script
Potential
Besides the obviously required work of applying the same approach to other components/RPCs, which is TBD in next PRs, I have the following thoughts: