timeout does not seem to work #17

Open

kjappelbaum opened this issue Nov 9, 2021 · 10 comments

@kjappelbaum (Contributor)

@lpatiny reported that there seem to be issues with the timeout

@kjappelbaum (Contributor, Author)

[Screenshot: Screen Shot 2021-11-09 at 09:38:49]

Hmm ... I can provoke timeouts.

@lpatiny commented Nov 11, 2021

This morning I again have a lot of processes running.

[Screenshot: list of running processes]

I will send you the full log by PM.

@lpatiny commented Nov 11, 2021

Maybe there could be a route that shows the currently running processes and when they started? Also, are we sure that when there is a timeout, the process that does the calculation is really killed?

@kjappelbaum (Contributor, Author) commented Nov 11, 2021 via email

@lpatiny commented Nov 11, 2021

For information, here is the list of processes currently running:

```
root     16673  0.0  0.0   2376   504 ?        Ss   Nov09   0:00 /bin/sh -c gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:$PORT -k uvicorn.workers.UvicornWorker
root     16743  0.0  0.0  31448 21728 ?        S    Nov09   0:20 /opt/conda/bin/python /opt/conda/bin/gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:8091 -k uvicorn.workers.UvicornWorker
root     16854  0.0  0.1 4832068 121324 ?      Sl   Nov09   1:41 /opt/conda/bin/python /opt/conda/bin/gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:8091 -k uvicorn.workers.UvicornWorker
root     16855  0.0  0.1 5366344 127272 ?      Sl   Nov09   1:42 /opt/conda/bin/python /opt/conda/bin/gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:8091 -k uvicorn.workers.UvicornWorker
root     16864  0.0  0.1 6498408 147200 ?      Sl   Nov09   1:46 /opt/conda/bin/python /opt/conda/bin/gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:8091 -k uvicorn.workers.UvicornWorker
root     24088 2330  0.1 3936608 105468 ?      Rl   Nov10 27220:13 /opt/conda/bin/python /opt/conda/bin/gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:8091 -k uvicorn.workers.UvicornWorker
root     25707  0.0  0.0 112684   996 pts/4    S+   06:11   0:00 grep --color=auto gunicorn
root     25824  0.0  0.1 6176272 137140 ?      Sl   Nov10   0:51 /opt/conda/bin/python /opt/conda/bin/gunicorn -w 4 xtbservice.xtbservice:app -b 0.0.0.0:8091 -k uvicorn.workers.UvicornWorker
root     27536  0.0  0.0   2376    32 ?        Ss   Sep17   0:00 /bin/sh -c gunicorn -w 2 --backlog 16 dimensionality_reduction.dimensionality_reduction:app -b 0.0.0.0:$PORT -k uvicorn.workers.UvicornWorker
root     28464  0.0  0.0  33088  2344 ?        S    Sep17  10:52 /usr/local/bin/python /usr/local/bin/gunicorn -w 2 --backlog 16 dimensionality_reduction.dimensionality_reduction:app -b 0.0.0.0:14101 -k uvicorn.workers.UvicornWorker
root     30893  0.1  0.0 2567200 50356 ?       Sl   Sep17  88:04 /usr/local/bin/python /usr/local/bin/gunicorn -w 2 --backlog 16 dimensionality_reduction.dimensionality_reduction:app -b 0.0.0.0:14101 -k uvicorn.workers.UvicornWorker
root     30895  0.1  0.0 2788388 54800 ?       Sl   Sep17  87:45 /usr/local/bin/python /usr/local/bin/gunicorn -w 2 --backlog 16 dimensionality_reduction.dimensionality_reduction:app -b 0.0.0.0:14101 -k uvicorn.workers.UvicornWorker
```

@kjappelbaum (Contributor, Author)

> Maybe there could be a route that shows the currently running processes and when they started?

Typically, this is not something I'd put in the service code.

@kjappelbaum (Contributor, Author)

> Are we sure that when there is a timeout, the process that does the calculation is really killed?

This is the only thing I can imagine now: that there is still some thread remaining.
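To illustrate the suspicion, here is a minimal sketch (assuming the timeout is enforced from a thread, which may not match the actual xtbservice code) of why a thread-based timeout returns an error to the caller while the worker keeps burning CPU, plus a process-based variant that can actually be killed:

```python
# Minimal sketch, NOT the service's actual code: a thread-based timeout
# raises TimeoutError in the caller, but Python threads cannot be killed,
# so the worker keeps running in the background.
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError
from multiprocessing import Process

def expensive_calculation():
    # stand-in for a long-running xtb calculation
    while True:
        time.sleep(1)

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(expensive_calculation)
try:
    future.result(timeout=2)
except TimeoutError:
    # the request "times out", yet the worker thread is still alive
    print("timed out, but the worker thread keeps running")

# In contrast, a subprocess can really be terminated on timeout:
p = Process(target=expensive_calculation)
p.start()
p.join(timeout=2)
if p.is_alive():
    p.terminate()  # actually frees the CPU
    p.join()
```

If the service's timeout works like the first half of this sketch, that would explain the long-lived gunicorn worker eating CPU in the process list above.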

@kjappelbaum (Contributor, Author)

And you can also add a CPU limit in the docker-compose file to avoid having to restart manually: https://docs.docker.com/compose/compose-file/compose-file-v2/#cpu-and-other-resources. Maybe it is easier if you just use v2.
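For illustration, a minimal sketch of what such a compose-file v2 service could look like; the service name, image, port, and limit values are assumptions, not taken from the repository:

```yaml
# Sketch of a compose-file v2 service with resource caps;
# `cpus` requires file format >= 2.2.
version: "2.4"
services:
  xtbservice:
    image: xtbservice:latest   # hypothetical image name
    ports:
      - "8091:8091"
    cpus: 2.0        # cap the container at two CPUs
    mem_limit: 4g    # hard memory cap
```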

@kjappelbaum (Contributor, Author)

I'll push an example with those settings.

[Screenshot: Screen Shot 2021-11-11 at 08:04:45]

For me, locally, I couldn't find a way to run out of resources.

@kjappelbaum (Contributor, Author)

[Screenshot: Screen Shot 2021-11-11 at 08:07:48]

So I really did start a lot of concurrent requests.
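For reference, a rough sketch of one way to fire many concurrent requests at the service; the endpoint path and query parameter are hypothetical placeholders, not the actual xtbservice API:

```python
# Hammer the service with concurrent requests to try to exhaust it.
# The URL and parameters are assumptions; adapt them to the real API.
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://localhost:8091/ir"  # hypothetical endpoint

def hit(i: int):
    try:
        resp = requests.get(URL, params={"smiles": "CCO"}, timeout=120)
        return resp.status_code
    except requests.RequestException as exc:
        return f"request {i} failed: {exc}"

with ThreadPoolExecutor(max_workers=50) as pool:
    for result in pool.map(hit, range(200)):
        print(result)
```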
