The framework default Gunicorn configs do not make sense for Gen2 environments #241
Comments
Any updates?
Hi @xSAVIKx, thank you for the suggestion. If I understand correctly, your concern is that there will be only a single worker per function instance, which would be a bottleneck and wasteful for higher vCPU counts. You are correct: currently there is no way to tune this for Cloud Functions. I will mark the ability to configure the worker count as an enhancement request.

Other ways you can configure your Function deployment:

- The number of instances scales up and down based on need, with lower and upper bounds you can configure. See: https://cloud.google.com/functions/docs/configuring/min-instances
- The thread setting is 1024 because the concurrency feature, if turned on, allows many requests to be served by a single instance. See: https://cloud.google.com/functions/docs/configuring/concurrency

Perhaps you could try playing around with the above controls for performance tuning. As a reminder, Cloud Functions is meant to be a more managed offering, with some controls unavailable and vCPU purposefully an abstraction. If you still need more control, you may find it with Cloud Run - for example, its docs have a section on optimizing Python applications and Gunicorn: https://cloud.google.com/run/docs/tips/python#optimize_gunicorn

Let me know if that helps!
Hi @HKWinterhalter. Thanks for the explanations, but please clarify a couple of things. A Function deployment still has vCPU/RAM resources available to each Functions server (instance), right? Are you saying that if I pick a gen2 GCF with 8 vCPUs, it will use all 8 vCPUs with the current Functions Framework setup? Or are these not really CPUs, but allocations of a single CPU with some GHz?
After further investigation, your initial understanding was correct. I am marking this and your suggestions as an enhancement request. (I have also edited my original response so as not to confuse future readers.) Thank you!
@HKWinterhalter is it going to be OK if I create a PR to implement support for custom Gunicorn configs? Will someone have time to review and release it then?
@xSAVIKx I believe you can configure the Gunicorn params through env variables now
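If that's the case, the framework would presumably read something along these lines at startup. The variable names below are assumptions for illustration only; check the gunicorn.py in the framework version you have installed for the actual names.

```python
import os

# Hypothetical environment variable names, shown only to illustrate the idea;
# verify against the installed functions-framework version before relying on them.
workers = int(os.environ.get("WORKERS", "1"))
threads = int(os.environ.get("THREADS", "1024"))
```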
Hey, so we're extensively using the framework to create our GCF and Cloud Run services, and it is very easy to use, which is great.
But when it comes to fine-tuning performance, things go rogue. E.g. some of our Cloud Run services need ~5 GB of RAM, which effectively means 2 vCPUs already, while the hard-coded Gunicorn configs force only a single worker:
functions-framework-python/src/functions_framework/_http/gunicorn.py
Lines 20 to 27 in 586cc4e
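For context, the referenced lines pin the Gunicorn options to fixed values. The snippet below is an approximate reconstruction based on the values discussed in this thread (one worker, 1024 threads), not a verbatim copy; see the permalink above for the exact code at that commit.

```python
import gunicorn.app.base


class GunicornApplication(gunicorn.app.base.BaseApplication):
    # Approximate reconstruction of the hard-coded defaults discussed in this
    # thread; the exact code lives at the permalink above.
    def __init__(self, app, host, port, debug, **options):
        self.options = {
            "bind": "%s:%s" % (host, port),
            "workers": 1,      # always a single worker, regardless of vCPU count
            "threads": 1024,   # high thread count to support request concurrency
            "timeout": 0,      # assumed: rely on the platform's request timeout instead
        }
        self.app = app
        super().__init__()

    def load_config(self):
        for key, value in self.options.items():
            self.cfg.set(key, value)

    def load(self):
        return self.app
```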
And this is going to be the same issue with GCF gen2, where you can use bigger instances in terms of memory, but they also come with more vCPUs that you are paying for but not using.
I suggest we either keep the defaults but let people configure/adjust the Gunicorn params, or implement a smarter solution that picks appropriate defaults (at least based on the number of vCPUs available), as sketched below.
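A minimal sketch of the "smarter defaults" idea, purely illustrative: the `GUNICORN_WORKERS` override used here is hypothetical and not something the framework defines today.

```python
import multiprocessing
import os


def default_worker_count() -> int:
    """Pick a Gunicorn worker count from the visible CPU count.

    GUNICORN_WORKERS is a hypothetical override used only for this sketch;
    the framework does not define it today.
    """
    override = os.environ.get("GUNICORN_WORKERS")
    if override:
        return int(override)
    # Common Gunicorn guidance: (2 * cores) + 1 workers.
    # Note: in containers this may report the host's cores rather than the
    # vCPUs actually allocated to the instance.
    return multiprocessing.cpu_count() * 2 + 1
```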