Problem with application status #195
Comments
I think some more details might be necessary to understand what the problem is in this case. Does the deployment rollout complete successfully at the Kubernetes level? How are the healthchecks for this application configured? Is it possible to create an example application configuration/application resource which can reproduce the issue? If the deployment rollout does not complete successfully, the healthchecks/readiness probe for the application might need to be adjusted. If the rollout completes successfully but takes longer than the ReadyCheck timeout because of external factors such as image pull or pod scheduling delays, one option could be to increase the timeout ReadyCheck uses.
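For context, a ReadyCheck of this kind is essentially a deadline-bounded poll: it repeatedly asks whether the deployment is ready, and marks it FAILED once the deadline passes. A minimal sketch, assuming a simple is_ready() callable and a fixed timeout (the actual fiaas-deploy-daemon implementation differs, e.g. it runs on a scheduler and derives its deadline from the application spec):

```python
import time


def ready_check(is_ready, timeout_seconds, poll_interval=0.01, clock=time.monotonic):
    """Poll is_ready() until it returns True or the deadline passes.

    Returns "SUCCESS" if the application became ready in time,
    "FAILED" if the deadline expired first.
    """
    deadline = clock() + timeout_seconds
    while clock() < deadline:
        if is_ready():
            return "SUCCESS"
        time.sleep(poll_interval)
    return "FAILED"
```

This makes the failure mode in the issue easy to see: if the application genuinely needs ~120 seconds to become ready but the deadline is derived from a smaller readiness value, the poll runs out of time and the status ends up FAILED even though the rollout would eventually have succeeded.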
Yes, the user used 120 minutes.
Is this a typo, or does the app need 120 minutes to become ready to serve requests? If so, you need to tell them to go back and build better apps. That kind of thing is not something that can or should be solved in the platform, that needs to be solved in application code.
Oh, yes, it was a typo, it was 120 seconds 😅
If you mean that it takes approximately 120 seconds before the readiness probe is successful (often or always), then it sounds to me like the application would benefit from a longer initial_delay_seconds on the readiness probe to accommodate the time it actually takes it to become ready. This should allow more time for the rollout to complete at the Kubernetes level, and would also increase the effective timeout in ReadyCheck. Does that seem like a reasonable solution, or is there a reason why readiness initial delay can't be increased?
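As an illustration of that suggestion, raising the readiness initial delay in the fiaas application config could look roughly like this; the endpoint path and values here are made up for the example, and the surrounding config is elided:

```yaml
version: 3
healthchecks:
  readiness:
    http:
      path: /_/ready
    # Give the app the ~120 seconds it actually needs before the
    # first readiness probe, instead of failing the rollout early.
    initial_delay_seconds: 120
```

Because ReadyCheck's effective timeout is derived from the readiness configuration, this one change both stops Kubernetes from probing too early and gives ReadyCheck a deadline that matches the application's real startup time.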
We just detected a problem with the application status result. We have a user who set a smaller readiness timeout than the liveness timeout, and the app constantly finishes in a FAILED status. What do you think about using the greater of the readiness and liveness values?
fiaas-deploy-daemon/fiaas_deploy_daemon/deployer/kubernetes/ready_check.py, line 39 (commit 727967d)
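The change proposed in the issue (basing the ReadyCheck deadline on the greater of the readiness and liveness values) could be sketched as follows. The spec objects here are simplified stand-ins assumed for illustration, not the actual fiaas-deploy-daemon model classes:

```python
from collections import namedtuple

# Simplified stand-ins for the health-check spec objects in the app spec.
Probe = namedtuple("Probe", ["initial_delay_seconds"])
HealthChecks = namedtuple("HealthChecks", ["liveness", "readiness"])


def ready_check_delay(health_checks):
    """Take the greater of the readiness and liveness initial delays,
    so a small readiness value cannot shrink the effective timeout
    below what the liveness configuration already allows for."""
    return max(health_checks.readiness.initial_delay_seconds,
               health_checks.liveness.initial_delay_seconds)
```

The trade-off, as the discussion above notes, is that this papers over a readiness configuration that does not match the application's actual startup time; fixing initial_delay_seconds directly addresses the root cause instead.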