Restarting the engine unconditionally resets the max_concurrency values #27
Comments
This is the only place that sets up the key, so naturally it's going to override the current value if one was set from a different client. I'd advocate for a second key that allows users to set their own limits, effectively as an override for the code's own limit. We can amend the Lua script so it looks for that key first and falls back to the real one. Users could potentially add a TTL on the second key to get the temporary-override behaviour.
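The override-then-fallback lookup described above can be sketched as a pure function. This is only an illustration: the override key name is hypothetical, and a plain dict of hashes stands in for the Redis connection so the logic can be shown without a live server.

```python
# Hypothetical override key; only 'spinach/_max_concurrency' exists today.
OVERRIDE_KEY = 'spinach/_max_concurrency_override'
BASELINE_KEY = 'spinach/_max_concurrency'

def effective_max_concurrency(redis_hashes, task_name):
    """Return the operator override for a task if present, else the baseline.

    `redis_hashes` is a dict mapping hash-key names to field dicts,
    standing in for HGET calls against a real Redis server.
    """
    value = redis_hashes.get(OVERRIDE_KEY, {}).get(task_name)
    if value is None:
        value = redis_hashes.get(BASELINE_KEY, {}).get(task_name)
    return value

# Simulated state: the code's baseline says 8, an operator override says 32.
hashes = {
    BASELINE_KEY: {'nap': 8},
    OVERRIDE_KEY: {'nap': 32},
}
print(effective_max_concurrency(hashes, 'nap'))  # override wins: 32
```

The same two-step lookup would live in the Lua script in practice, so the check and the decrement of the concurrency counter stay atomic.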
- Leave already-set concurrency values alone, always.
- Fix some unit tests that were throwing `InvalidJobSignatureError` errors
- Fix incompatibilities with tox >= 4.0

Fixes: Issue NicolasLM#27
In fact I'd say let's add a tool script to do this, so that the user doesn't need to know about our Redis keys. Something like:
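A minimal sketch of what such a tool might do, assuming the same hypothetical override hash key and a redis-py-style client exposing `hset` and `expire` (a stub client is used here so the example is self-contained; the key and function names are illustrative, not the project's actual API):

```python
class StubRedis:
    """Minimal stand-in for a Redis client: records hset/expire calls."""
    def __init__(self):
        self.hashes = {}
        self.ttls = {}

    def hset(self, name, key, value):
        self.hashes.setdefault(name, {})[key] = value

    def expire(self, name, seconds):
        self.ttls[name] = seconds

# Hypothetical key, separate from the real 'spinach/_max_concurrency' hash.
OVERRIDE_KEY = 'spinach/_max_concurrency_override'

def set_concurrency_override(client, task_name, limit, ttl=None):
    """Record an operator override for a task's max_concurrency.

    With a ttl, the override expires on its own, giving the temporary
    adjustment behaviour discussed above.
    """
    client.hset(OVERRIDE_KEY, task_name, limit)
    if ttl is not None:
        client.expire(OVERRIDE_KEY, ttl)

r = StubRedis()
set_concurrency_override(r, 'nap', 32, ttl=3600)
```

One caveat with this sketch: `EXPIRE` applies to the whole hash, so a TTL would drop every override at once; per-task TTLs would need one key per task instead of a single hash.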
Leave already-set concurrency values alone, always.

Fixes: Issue NicolasLM#27
This one is a bit complicated, but:
Preamble
A test case script:
Steps to reproduce
1. `docker-compose -f spinach/tests/docker-compose.yml up -d`
2. Run the test case script once
3. Run `redis-cli` and execute the command `HSET spinach/_max_concurrency nap 32`
4. Run the script again
Expected result
The second run of the script runs all 32 tasks simultaneously, like we told it to
Actual behavior
The second run of the script processes the queue 8-at-a-time
Miscellany
- There is a comment in the `set_concurrency_keys.lua` script explaining that it is done on purpose.
- We deploy with `max_concurrency` set to "baseline" values, and Operations occasionally has a need to tune the values up or down, for reasons.
- This is `flask_spinach` running under Gunicorn; when Gunicorn cycles out workers, the new workers start and destroy any runtime adjustments to Redis that operators may have made, and they do so with no warning.
- One could watch the `_max_concurrency` key, but Issue #15 (Jobs without concurrency limits generate `current_concurrency` hash keys in redis) makes that not work very well.

Any thoughts and insights on how to best address this would be appreciated.