WISH: environment variable for which port inside the container should be used #5
Comments
@JonBendtsen did you try to remap your Dolibarr container's exposed port via docker-compose?
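For reference, a host-side remap in Compose would look something like the fragment below (service name and image tag are illustrative). Note that it only changes the published host port, not the port Dolibarr binds to inside the container:

```yaml
services:
  dolibarr:
    image: dolibarr/dolibarr:latest   # illustrative image/tag
    ports:
      - "8080:80"   # host 8080 -> container 80; inside, Apache still listens on 80
```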
but it is not the exposed port I am looking to change, it is the one inside the container. Maybe this is Podman-specific. I put everything inside a pod, so all networking inside the pod is on localhost / 127.0.0.1. Since Dolibarr binds to port 80, which would be on localhost inside the pod, I cannot run a second Dolibarr inside the same pod. I wanted to use that second container for setting up the database, so that production could stay stopped until I had finished the setup/upgrade/... So far I have solved the issue by simply making two pods: one for setup, and once that is done I'll start the live one. They also reside on different domains, with nginx as a proxy in front to get HTTPS (mostly because I had issues getting the proxying to work with /dolibarr/ and /setup/ respectively).
let me elaborate a little about my plan: on the outside of the pod I would publish 2 ports:
On the inside, Dolibarr has to run on a different port, because otherwise the setup Dolibarr container would respond to requests meant for the live production Dolibarr container, since both use the same port number, port 80. Further out I have my nginx proxy. I would configure that to grant access to all my users for the live production, but only to me for the setup. So the plan was: start the pod, start the database, start the setup container, do the setup securely, and while doing that the live production address would be unresponsive. Once I am satisfied with the setup, I stop (or perhaps even remove) the setup container and create and start the live production container. I would use the same approach for upgrades to a newer version, where there might be database changes that have to be made.
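A rough sketch of that plan as podman commands (pod name, ports, images, and the `WEB_PORT` variable are all hypothetical; `WEB_PORT` is exactly the knob this issue asks for and does not exist in the image today):

```shell
# Hypothetical sketch; requires podman. WEB_PORT is the requested feature.
podman pod create --name dolibarr -p 80:80 -p 8081:8081
podman run -d --pod dolibarr --name db docker.io/library/mariadb:latest
podman run -d --pod dolibarr --name setup -e WEB_PORT=8081 dolibarr/dolibarr:latest
# ... do the setup via host port 8081, then swap in the live container:
podman rm -f setup
podman run -d --pod dolibarr --name live dolibarr/dolibarr:latest   # binds port 80
```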
I see what you mean; I don't know Podman well. It could be quite overkill, but maybe a Traefik-type reverse proxy in front (using the load-balancer weight feature) ... though it would probably be redundant with the one included with Podman. I don't use Dolibarr for professional purposes, so I don't face this kind of issue (HA) ... ^_^
So far I have circumvented the issue by creating a full pod for setup and a full pod for live production. This seems to work just as well, maybe just taking up a few more resources than if I were only stopping one Dolibarr container and then starting the other. With full pods I am stopping and starting:
That might possibly work; I'm also interested in trying https://apisix.apache.org/
I do use it for production, and I'm looking to scale! (I've made a dance class registration system based on Dolibarr and some other containers that use the Dolibarr API. When I presented it at dance camps this summer, they all wanted a service, not DIY, so that is what I'm trying to set up. I use containers to avoid potential mixing of different organizers.)
I have little knowledge about Podman too, but AFAIK the Pod notion is the same as in Kubernetes (which I am quite accustomed to). I think using two dedicated Pods (one for the runtime, another for the setup) as you said is definitely the way to go here. IMHO port mapping should not be the job of the container but rather the job of the container runtime. What do you think?
I'm using podman exactly because of the pod similarity with Kubernetes, which I'll switch to if I need to SCALE.
I think we are almost talking about the same thing. The outside of the pod (or container) is most likely handled by the container runtime, but some container images do support changing the port, like https://hub.docker.com/_/phpmyadmin which has an environment variable called APACHE_PORT.
As I'm writing this I notice that both phpMyAdmin and Dolibarr use PHP and Apache, so maybe the method phpMyAdmin uses to control the port that Apache binds to inside the container can be copied.
Well, I think that would be something reasonable to add if someone wants, for example, to run the services as a non-privileged user (port 80 would not work there), and it would help with your use case as well. I think a PR adding a new WEB_PORT variable (defaulting to 80) could work.
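A minimal sketch of what the entrypoint side of such a `WEB_PORT` variable could look like, assuming the Debian-style Apache config layout used by most PHP/Apache images (all paths and the variable itself are hypothetical; the demo works on scratch copies of the files so it can be run anywhere):

```shell
#!/bin/sh
# Sketch: apply a hypothetical WEB_PORT variable (default 80) to Apache's
# config. A real entrypoint would edit /etc/apache2/ports.conf and the
# default vhost in place; this demo uses scratch copies to stay self-contained.
WEB_PORT="${WEB_PORT:-80}"

CONF_DIR="$(mktemp -d)"
printf 'Listen 80\n' > "$CONF_DIR/ports.conf"
printf '<VirtualHost *:80>\n</VirtualHost>\n' > "$CONF_DIR/000-default.conf"

# Rewrite the default port 80 to WEB_PORT in both files.
sed -i "s/^Listen 80$/Listen ${WEB_PORT}/" "$CONF_DIR/ports.conf"
sed -i "s/:80>/:${WEB_PORT}>/" "$CONF_DIR/000-default.conf"

cat "$CONF_DIR/ports.conf"
head -n 1 "$CONF_DIR/000-default.conf"
```

Running it with `WEB_PORT=8081` would print `Listen 8081` and `<VirtualHost *:8081>`, which is essentially the mechanism the phpMyAdmin image uses for its port variable.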
Oh yeah, using a non-privileged user is also a benefit. But why not just use the same variable name that phpMyAdmin uses? Possibly because it sounds very linked to Apache: I'm seeing other requests that talk about other web servers, and it would make sense to have the same variable name across all the different Dolibarr image variants, and for non-Apache images it wouldn't make sense to call it APACHE :-D Final question: do you expect me to make a PR? Because that will not happen at the moment due to other priorities.
That's exactly why. When PHP-FPM/NGINX support happens, APACHE_PORT would not make sense in that context. Hence the more general "WEB" term ;-P
That would be great if you can handle it, of course. Otherwise maybe someone else will come up with it, who knows :-) Cheers
Easily fixed with MR #21.
Hi @JonBendtsen, I fail to see how port configuration is fixed by #21. Could you enlighten me please? Cheers |
Sure. What I propose in that documentation fix is that people should just mount a new Apache configuration file over the one coming with the container image. Once you do that, you can change the port, set the ServerName, and so on.
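For anyone finding this later, the mount-over approach could look roughly like this (file names, ports, and the image name are illustrative, and the path assumes a Debian-style Apache layout; changing `Listen` alone may also require overriding the default vhost so it matches the new port):

```shell
# Illustrative only: write a replacement ports.conf and mount it over the
# one shipped in the image (path assumes /etc/apache2/ports.conf exists).
cat > my-ports.conf <<'EOF'
Listen 8080
EOF
docker run -d --name dolibarr-setup \
  -v "$PWD/my-ports.conf:/etc/apache2/ports.conf:ro" \
  -p 8081:8080 dolibarr/dolibarr:latest
```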
I wish I could use an environment variable to choose which port inside the container should be used for running Dolibarr.
I would use this for upgrades like going from v19.0.3 to v20.0.0.
I'd shut down the container running v19.0.3 on port 80 and then start the v20.0.0 container published on a different port. That different port would then only allow me to connect to the v20.0.0 container. I can then do the necessary upgrades, and once that is done I can shut down the v20.0.0 upgrade container and start another v20.0.0 container running on port 80.
Should anyone try to access port 80 while I am upgrading, they won't see anything, as that container would not be responding.
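The upgrade workflow described above, sketched as commands (container names, ports, and the `WEB_PORT` variable are hypothetical; `WEB_PORT` is the feature being requested here):

```shell
# Hypothetical upgrade flow; WEB_PORT does not exist in the image today.
podman stop dolibarr-v19                     # live v19.0.3 on port 80 goes dark
podman run -d --name dolibarr-v20-upgrade \
  -e WEB_PORT=8080 -p 8080:8080 dolibarr/dolibarr:20.0.0
# ... connect to :8080, run the database migration, then:
podman rm -f dolibarr-v20-upgrade
podman run -d --name dolibarr-v20 -p 80:80 dolibarr/dolibarr:20.0.0
```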