Add images for celery, rabbitmq etc. for use in production #2
This would probably need a separate repo, as it's doing a different job to the docker config in the CATMAID repo. In particular, we'd probably want to be more Docker-y, i.e. running a single service per container rather than treating the container like a VM and having supervisor manage subprocesses. Therefore, we'd need to split the existing all-in-one setup into new single-purpose containers.

Daphne and celery could be run from the same docker image as uwsgi, just with a different entry point. They can share a relatively simple set of environment variables: docker-compose applications are started on their own network bridge by default, so the exposed ports can be identical for a whole cloud of compose applications, and each would only differ in which ports it published, which could be set with an environment variable. With any luck, this will primarily be a case of refactoring the CATMAID entry point script to split out its different elements and invoking each as a separate entry point.

Lastly, we'd want to roll the R environment into the container. This is currently a problem because of GitHub's rate limits when downloading all the required R packages (could any come from CRAN?), unless we also used a personal access token.

Possible additional volumes:
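A shared entry point that dispatches on an argument (or an environment variable) would let all three containers use the same image. A minimal sketch, assuming hypothetical service names and config paths (this is not the current CATMAID entry point script):

```shell
#!/bin/sh
# Hypothetical shared entry point for the uwsgi, daphne and celery containers.
# The service to run is chosen by the first argument, falling back to the
# CATMAID_SERVICE environment variable, then to uwsgi.
set -e

SERVICE="${1:-${CATMAID_SERVICE:-uwsgi}}"

case "$SERVICE" in
  uwsgi)
    exec uwsgi --ini /etc/uwsgi/catmaid.ini
    ;;
  daphne)
    # Internal port can be identical across compose applications; only the
    # published host port needs to differ per deployment.
    exec daphne -b 0.0.0.0 -p "${DAPHNE_PORT:-8001}" mysite.asgi:application
    ;;
  celery)
    exec celery -A mysite worker -l info
    ;;
  *)
    echo "Unknown service: $SERVICE" >&2
    exit 1
    ;;
esac
```

Each container in the compose file would then set a different command (or `CATMAID_SERVICE`) against the same image.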
I think it breaks down like this:

Volumes would be:

Either the postgres or "startup" container would have a CMD which could load initial data. If we set up the R environment first (as the Python deps change more frequently) and use a CATMAID-org PAT in CI, we wouldn't duplicate any downloads and the rate limits should be fine. The layered file system should mean that quadruplicating CATMAID images doesn't take up any extra space.

The nginx container routes external requests to the three web images and the static file directories, so the host nginx should only have to route to the nginx container. Does this get complicated in that the internal Django config needs to know the route used to reach the host nginx? Or does Django just need to know about the setup with the nginx container? I suppose it can be controlled with environment variables, so you'd just need a single variable per deployment.
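If the externally visible route is the only thing the internal Django config needs, it could be injected per deployment as one variable. A sketch, with hypothetical variable and project names:

```shell
# Hypothetical: each deployment passes its host-facing URL as a single
# environment variable; nginx handles the actual routing.
export CATMAID_EXTERNAL_URL="https://example.org/catmaid-a"
docker-compose -p catmaid-a up -d

# Inside the container, settings.py could derive the URL prefix from the
# same variable, e.g. (Django's real FORCE_SCRIPT_NAME setting):
#   FORCE_SCRIPT_NAME = urlparse(os.environ["CATMAID_EXTERNAL_URL"]).path
```

That way neither the image nor the compose file needs to hard-code the route used to reach the host nginx.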