Reassess default `kytos.conf` `api` thread pool size and the related `api_concurrency_limit` #489
Did some testing, and here are the results:

**Test Case 1**
Test Environment:

**Test Case 2**
Notes: Previous tests clearly weren't hitting it hard enough, stepping it up a bit here.
Test Environment:

**Test Case 3**
Test Environment:
**Conclusion**
To me it seems that having around 1 API thread per 2 EVCs is near the tipping point of stability with this. I would err towards 2 API threads for every 3 EVCs. So assuming we have a system with 400 EVCs, it should be around 260 API threads.
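The sizing rule above (2 API threads for every 3 EVCs) is simple enough to write down as a tiny helper. This is just an illustration of the arithmetic; the function name is mine, not an existing Kytos utility:

```python
import math


def recommended_api_threads(evcs: int, threads_per_evc: float = 2 / 3) -> int:
    """Hypothetical helper: size the API thread pool at ~2 threads per
    3 EVCs, the stability margin suggested in this issue."""
    return math.ceil(evcs * threads_per_evc)


print(recommended_api_threads(400))  # 267, close to the ~260 suggested above
```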
Great results and data points @Ktmi. They'll serve well as current and future reference. Did you also have a chance to check whether increasing the API thread pool size consumes too much extra RAM or not much? Can we settle on a value? As known, and for others to be aware too, we currently have a bit of extra RAM consumption (#404) that will be investigated in the future, but if we're not adding too much it's OK.
@viniarck Checking the RAM usage, it doesn't appear to use much. For comparison, running at 512 API threads with 800 EVCs results in using around ~370 MB of RAM, with a total of around 860 kernel threads. Running with 256 API threads and 800 EVCs results in using around ~360 MB of RAM, with a total of 720 kernel threads.
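For anyone wanting to reproduce the RAM/thread-count measurements above, one Linux-only way (an assumption on my part, not necessarily how @Ktmi measured) is to read the kytosd process's `/proc` entry:

```python
import os


def process_stats(pid: int) -> dict:
    """Read resident memory (VmRSS, in kB) and kernel thread count
    from /proc/<pid>/status (Linux only)."""
    stats = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, value = line.partition(":")
            if key in ("VmRSS", "Threads"):
                stats[key] = int(value.split()[0])
    return stats


# Inspect the current process as a smoke test; point pid at kytosd in practice.
print(process_stats(os.getpid()))
```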
Cool. Good to know. Let's go for it.
This is for reassessing the default `kytos.conf` `api` thread pool size and the related `api_concurrency_limit`.

Current potential problems:
- The current default 160 has been tweaked once, but I'm not sure how many EVCs and what network convergence were tested at the time. We know that `mef_eline`, `flow_manager` and `telemetry_int` are the most request-intensive ones, so this should be reassessed with a scenario that is not too massively large but still realistic for a production network, for instance 360 EVCs in place (so, maybe leave some buffer and go for 400 EVCs). We can also consider keeping the default and only documenting recommended values. We'd like a somewhat reasonable default value that won't need too much tweaking out of the box, while still avoiding other issues; we need to find a balance.
- When API concurrency gets limited it returns 503, contributing to stability; however, there are currently other open bugs that lead to major issues such as:
  - kytos-ng/mef_eline#483
  - kytos-ng/mef_eline#485

These related bugs will need to be fixed too, but we need to review whether `api_concurrency_limit` needs to be readjusted and/or document recommended values for a given EVC scalability.

cc'ing @Ktmi and @italovalcy