Docker + Docker Compose #2

Closed
brettwilcox opened this issue Jan 4, 2021 · 89 comments
Labels: enhancement (New feature or request), High Priority (These are the most important issues)

Comments

@brettwilcox

Docker + Docker Compose

The project deployment needs to be simplified with a Docker and docker-compose setup. We need a pull request adding this as a first-class, supported deployment model.

@brettwilcox
Author

I'm looking to use this as an "on-demand" platform for provisioning live video streams between students and teachers. It would be nice if we could get a Terraform deployment together as well.

scorpion/lms#50

@NuroDev

NuroDev commented Jan 4, 2021

I agree, a Docker Compose stack would greatly simplify the setup for the project.
If this were converted into a monorepo, taking all three projects (Rust, Go, and React) and putting them into one repository, you could run a command as simple as the one below to get started:

git clone https://github.com/GRVYDEV/Project-Lightspeed.git && \
cd Project-Lightspeed && \
docker-compose up -d

The docker-compose file can then just be placed in the project root and link to the Dockerfiles located in each project's directory. E.g.:

version: '3'

services:
    ingest:
        build:
            context: ./Lightspeed-ingest
            dockerfile: Dockerfile
        image: grvydev/lightspeed_ingest:latest
        container_name: lightspeed_ingest
        ports:
            - 8084:8084
    webrtc:
        build:
            context: ./Lightspeed-webrtc
            dockerfile: Dockerfile
        image: grvydev/lightspeed_webrtc:latest
        container_name: lightspeed_webrtc
        ports:
            - 8080:8080
        command: --addr=XXX.XXX.XXX.XXX # Or an env variable
    react:
        build:
            context: ./Lightspeed-react
            dockerfile: Dockerfile
        image: grvydev/lightspeed_react:latest
        container_name: lightspeed_react
        ports:
            - 80:80 # Will rely on adding a `serve` command

@brettwilcox
Author

brettwilcox commented Jan 4, 2021

@NuroDev Yup, we can use this as the "builder repo", or we could merge the projects into a monorepo.

If we keep them separate like they are now, all we would need to do is compile the binaries and pull the Docker Hub images. I opened GRVYDEV/Lightspeed-ingest/issues/15 to address this.

@brettwilcox
Author

brettwilcox commented Jan 4, 2021

Here are the ports per GRVYDEV/Lightspeed-ingest/issues/15:

Ingest - 8084:8084
WebRTC - 65535:65535 && 8080:8080
React - 80:80

@NuroDev

NuroDev commented Jan 4, 2021

As I put in the example docker-compose.yml file above, a lot of the stuff in it is ready to go. We just need to create the individual Dockerfiles for each project.

But it should be noted we will need a basic web server for the React frontend UI. Also, I think it would be better to set the webrtc address via an environment variable rather than a command.
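For illustration, a sketch of how that could look in the compose file, assuming the webrtc binary were changed to read a hypothetical ADDR environment variable instead of the --addr flag:

    webrtc:
        build:
            context: ./Lightspeed-webrtc
            dockerfile: Dockerfile
        environment:
            - ADDR=XXX.XXX.XXX.XXX # hypothetical variable; the binary would need to read it
        ports:
            - 8080:8080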

@Janhouse

Janhouse commented Jan 4, 2021

The golang service is not particularly Docker friendly. You have to specify a listen IP for it, but it expects the external IP. If you pass an internal IP, that IP is then sent to the client (web browser), and obviously the browser can't reach some random internal Docker IP over the internet.

This is a bit rough, but it launches and accepts streams; only delivering them to the browser is problematic, and that should be an easy fix.

docker-compose.yml:

version: '3.3'
services:
    lightspeed:
      build: .
      command: /lightspeed/run.sh
      restart: always
      environment:
        - EXTERNAL_HOSTNAME=example.com:8080
      ports:
        - "80:80"
        - "8084:8084"
        - "8080:8080"
        - "65535:65535/udp"

Dockerfile:

FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=UTC
RUN apt-get update && \
  apt-get install -y \
  golang \
  rustc \
  nodejs \
  npm \
  git
# Build the Rust ingest service
RUN mkdir -p /lightspeed && \
  cd /lightspeed && \
  git clone https://github.com/GRVYDEV/Lightspeed-ingest.git && \
  cd Lightspeed-ingest && \
  cargo build --release
# Build the Go WebRTC service
RUN mkdir -p /lightspeed && \
  cd /lightspeed && \
  git clone https://github.com/GRVYDEV/Lightspeed-webrtc.git && \
  cd Lightspeed-webrtc && \
  GO111MODULE=on go build
# Build the React frontend, patching wsUrl.js so the URL can be rewritten at startup
RUN mkdir -p /lightspeed && \
  cd /lightspeed && \
  git clone https://github.com/GRVYDEV/Lightspeed-react.git && \
  cd Lightspeed-react && \
  npm install && \
  npm install -g serve && \
  sed -i "s|export default|export|g" src/wsUrl.js && \
  sed -i -e '$a export default url;' src/wsUrl.js && \
  npm run-script build
COPY run.sh /lightspeed/run.sh
# Ensure the startup script is executable
RUN chmod +x /lightspeed/run.sh

run.sh:

#!/bin/bash
# Point the built frontend at the public hostname
find /lightspeed/Lightspeed-react/build/ -name "main*.js" -exec sed -i "s|stream.gud.software:8080|$EXTERNAL_HOSTNAME|g" {} \;

# Start ingest and WebRTC in the background (addr = the container's own IP, the last entry in /etc/hosts), then serve the frontend
/lightspeed/Lightspeed-ingest/target/release/lightspeed-ingest &
/lightspeed/Lightspeed-webrtc/lightspeed-webrtc --addr=$(awk 'END{print $1}' /etc/hosts) &
cd /lightspeed/Lightspeed-react/ && serve -s build -l 80

You can use it for some inspiration maybe. 😄

@Janhouse

Janhouse commented Jan 4, 2021

This is what I meant:

[screenshot]

@GRVYDEV
Owner

GRVYDEV commented Jan 4, 2021

> Here are the ports per GRVYDEV/Lightspeed-ingest/issues/15:
>
> Ingest - 8084:8084
> WebRTC - 65535:65535 && 8080:8080
> React - 80:80

Ingest - 8084:8084
WebRTC - 65535:8080
Lightspeed / React - 80:80

@jchook

jchook commented Jan 4, 2021

> If this were converted into a monorepo...

You could also use submodules if you prefer to keep issues, commits, versions, etc. separate.
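For reference, wiring that up from the root of Project-Lightspeed would look roughly like this:

git submodule add https://github.com/GRVYDEV/Lightspeed-ingest.git
git submodule add https://github.com/GRVYDEV/Lightspeed-webrtc.git
git submodule add https://github.com/GRVYDEV/Lightspeed-react.git
git commit -m "Add service repos as submodules"

# consumers then clone everything in one go:
git clone --recurse-submodules https://github.com/GRVYDEV/Project-Lightspeed.git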

@Janhouse

Janhouse commented Jan 4, 2021

I thought that adding a quick workaround by replacing the *addr in the ListenAndServe and ListenUDP calls with a hard-coded "0.0.0.0" would solve it, but then I noticed it still sent internal addresses, so I guess it obtains them dynamically somewhere. As long as those don't get rewritten to the external address, it won't work in Docker or behind any other NAT.

@GRVYDEV
Owner

GRVYDEV commented Jan 4, 2021

> I thought that adding a quick workaround by replacing the *addr […]

Yeah, so if we are behind NAT we will need to use a TURN server for WebRTC.

@brettwilcox
Author

Something like https://github.com/coturn/coturn?
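An untested sketch of what adding coturn to the compose stack might look like; the flags are standard turnserver options, and the realm and external IP are placeholders:

    coturn:
        image: coturn/coturn
        network_mode: host
        command: -n --log-file=stdout --realm=example.com --external-ip=XXX.XXX.XXX.XXX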

@GRVYDEV
Owner

GRVYDEV commented Jan 4, 2021

> I thought that adding a quick workaround by replacing the *addr […]

[screenshot]

What about something like this?

@brettwilcox
Author

brettwilcox commented Jan 4, 2021

> [screenshot]
>
> What about something like this?

That would work actually. I was playing around with that mode for Consul and Nomad. The networking parts are complicated enough that I am just going to install the binary on the servers. But for an app like this it would be perfect.

@Crowdedlight
Contributor

Crowdedlight commented Jan 4, 2021

I am running each service in its own container, using docker-compose for them. Ingest seems to accept input from OBS, and OBS is happy. I can access the React site on localhost, but the browser is not able to connect to webRTC, as I assume it uses internal Docker IPs instead of the host's.

EDIT: I changed it to use network_mode: host in the compose, and that makes everything work on localhost. Although not ideal for a deployment scenario. My branch: https://github.com/Crowdedlight/Project-Lightspeed/tree/feature/docker
[screenshot]

I might try to deploy the containers I have made on my own VPS when I get time again. And ideally, I want to incorporate the scripts to add custom stream keys etc. But the Dockerfiles and compose file can be used as inspiration. (Ideally they should all also just use the binaries or multi-stage builds. Only React uses a multi-stage build atm.)

I do not know how easily this carries over to public-facing IPs, but at least it all works on localhost with Docker and Compose.

I agree that adding the three service repos as submodules in this top repo would probably make it easier for CI/CD, as you can use the submodules directly in the Dockerfiles without having to git clone them.
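On the multi-stage point above: a hypothetical multi-stage Dockerfile for the Go service, assuming it builds with a plain go build, could be as small as this (the final image is scratch, so it contains only the static binary):

FROM golang:1.15 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /lightspeed-webrtc

FROM scratch
COPY --from=build /lightspeed-webrtc /lightspeed-webrtc
ENTRYPOINT ["/lightspeed-webrtc"]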

@qdm12

qdm12 commented Jan 4, 2021

Great idea, Docker would help adoption dramatically, I think.

Adding my two cents: you can use buildx to build it for multiple CPU architectures (👀 arm) using a GitHub workflow similar to this
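For example (the image name is a placeholder, and QEMU emulation is assumed to be set up on the build machine):

docker buildx create --use
docker buildx build \
    --platform linux/amd64,linux/arm64,linux/arm/v7 \
    -t example/lightspeed-webrtc:latest \
    --push .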

@GRVYDEV
Owner

GRVYDEV commented Jan 4, 2021

> I am running each service in its own container, using docker-compose for them. […]

Due to the nature of WebRTC, I think if we are going to use Docker it will have to be --net=host; otherwise we would have to deploy a TURN server, which would greatly complicate the deployment process and thus render the point of Docker kind of useless.
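In compose terms that is network_mode: host on each service. Note that ports: mappings are ignored in that mode, so each service binds its default port directly on the host:

    webrtc:
        build:
            context: ./Lightspeed-webrtc
        network_mode: host
        restart: on-failure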

@GRVYDEV
Owner

GRVYDEV commented Jan 4, 2021

> I am running each service in its own container, using docker-compose for them. […]

In regards to stream keys, I will implement a better system for them. Basically the ingest will make a cfg file that houses the stream key; if you want to reset it, you will just run a command. Could you add some deployment instructions to your Docker repo? I want to give it a try on my VPS, and if it works I would like to get it to master!

@Crowdedlight
Contributor

> Due to the nature of WebRTC, I think if we are going to use Docker it will have to be --net=host […]

I think that is a fair assumption. I don't know the inner workings of webRTC well enough to judge how and when you would need a TURN server. :)

Probably best just to go with --net=host for the time being. That will however make it more important that individual ports on all network services can be configured, as many servers will already have webservers or other services running on port 80 or 8080. So it would be a requirement to make those configurable, to avoid collisions now that Docker can't map them to custom ports.

> In regards to stream keys, I will implement a better system for them. […]

That makes sense. I will add some information in the readme and do a push in a bit.

@GRVYDEV
Owner

GRVYDEV commented Jan 4, 2021

> Probably best just to go with --net=host for the time being. That will however make it more important that individual ports on all network services can be configured […]

For those already running webservers, something like NGINX could be used to route a certain subdomain to a different web port. For example, gud.software could be my main site, and stream.gud.software could route to the React port.
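A minimal sketch of that NGINX routing, assuming the React container were moved off port 80 to, say, 3000:

server {
    listen 80;
    server_name stream.gud.software;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}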

@Crowdedlight
Contributor

> For those already running webservers, something like NGINX could be used to route a certain subdomain to a different web port. […]

I might be wrong here, but I saw the hosts bind port 0.0.0.0:80 and 0.0.0.0:8080, which means you bind all addresses and can't use the same port between different services?

@Crowdedlight
Contributor

I added a readme to my branch. I hope it clears some stuff up. It also showcases the pitfalls of the current docker setup in terms of setting the right IPs etc. ;-)

https://github.com/Crowdedlight/Project-Lightspeed/blob/feature/docker/docker_README.md

@Janhouse

Janhouse commented Jan 4, 2021

You can run a TURN server in Docker as well (for example coturn). I don't really know the WebRTC protocol and all the signaling bits, but I thought it is just a matter of adding TURN into the mix.

@GRVYDEV
Owner

GRVYDEV commented Jan 4, 2021

> I added a readme to my branch. […]

Going to give it a try now!

@GRVYDEV
Owner

GRVYDEV commented Jan 4, 2021

> I added a readme to my branch. […]

Absolutely fantastic work! I was able to get it up and running really easily! I would like to merge this into master and then turn the respective folders into submodules.

@Crowdedlight
Contributor

> Absolutely fantastic work! […]

I haven't played with submodules and Docker that much, but it should be possible to add the Dockerfile to the root of each repo and then add each service repo as a submodule in this repo. Then have the compose file in the root of this repo and have it reference the submodules. There might be methodologies that work better if you release binaries from each service instead, etc. I believe some examples were given further up in this thread.

And a better method of setting the wsUrl.js that doesn't require modification to the Dockerfile itself. :)

@GRVYDEV
Owner

GRVYDEV commented Jan 4, 2021

> I haven't played with submodules and Docker that much […]

I am working on the submodules right now on /feature/submodules. As far as React goes, we could just grab an ENV var and default to localhost?
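Compose's variable substitution supports exactly that default-to-localhost pattern; WEBSOCKET_HOST is a hypothetical variable name here:

    react:
        build:
            context: ./Lightspeed-react
        environment:
            - WEBSOCKET_HOST=${WEBSOCKET_HOST:-localhost}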

@Janhouse

Janhouse commented Jan 4, 2021

It would be great if you updated the value during startup so you don't have to recompile every time you want to adjust some parameters.

@GRVYDEV
Owner

GRVYDEV commented Jan 4, 2021

Could you all review https://github.com/GRVYDEV/Project-Lightspeed/tree/feature/submodules and let me know what you think?

@GRVYDEV
Owner

GRVYDEV commented Jan 6, 2021

> You could technically put a variable in an entrypoint: sed ${VARIABLE} && /entrypoint or command: block, which could be read from a .env file.
>
> As a side note, I wrote reactserv, which reads static React files from disk and serves them from memory, as well as modifying them to set the root URL in the React code. You could use a similar approach to serve your React code and replace values at start through an env variable. That way you can even have the final image based on scratch (no OS), as long as you use Go/Rust and compile statically.

So theoretically, in the docker-compose file I could do command: sed -i "s|stream.gud.software|ENV_VAR|g" build/config.json && serve -s build -l 80, which would replace the current command?
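Roughly, yes, though the && chaining needs a shell, and the $ must be doubled so Compose defers expansion to the container's environment instead of interpolating at parse time. Something like this, with WS_HOST as a placeholder:

        command: sh -c 'sed -i "s|stream.gud.software|$$WS_HOST|g" build/config.json && serve -s build -l 80'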

@Woodham

Woodham commented Jan 6, 2021

A simple solution to overriding the websocket URL in a running container is to just document how to provide your own config.json file using Docker mounts.

@k3d3

k3d3 commented Jan 6, 2021

While I'm not sure of the best solution, I would also like some way to change the port number as well, since I'm using port 8085 in my setup.

@k3d3

k3d3 commented Jan 6, 2021

I do like that idea too - providing a config.json via docker volume - but that also seems pretty heavyweight for just a url change.

@GRVYDEV
Owner

GRVYDEV commented Jan 6, 2021

> While I'm not sure of the best solution, I would also like some way to change the port number […]

Yup, the plan is to have the config change the entire URL, including the port. It sounds like we may mount a custom config and then point React at that.

@GRVYDEV
Owner

GRVYDEV commented Jan 6, 2021

> While I'm not sure of the best solution, I would also like some way to change the port number […]

Also, I will add support for changing the websocket host URL in Lightspeed-webrtc once I get this Docker stuff finished.

@Woodham

Woodham commented Jan 6, 2021

> I do like that idea too - providing a config.json via docker volume - but that also seems pretty heavyweight for just a url change.

I think it's a fairly common pattern - mounting config files into Docker containers. While it's just a url/port change now, it also allows further config easily later :)

@sabjorn

sabjorn commented Jan 6, 2021

> I do like that idea too […]

It's probably not a bad idea to have a config file for this project in general. There does seem to be enough state to require some way of configuring the project (other than using sed), and a single config (volumed read-only into each container) would solve this problem.

Alternatively, as someone mentioned above, ENV variables are not a bad idea either. A lot of containers use this.

This is fairly common; for example, the official NGINX image uses:

  environment:
    - NGINX_HOST=foobar.com
    - NGINX_PORT=80

although I think this might actually just be a feature of NGINX and not something handled by Docker...

If having configuration within the code isn't desired, the entrypoint of each container is also a good place to handle this (obviously, as long as these are runtime configurations. I haven't dug deep enough to see if this is the case).

@GRVYDEV
Owner

GRVYDEV commented Jan 6, 2021

> I do like that idea too […]
>
> I think it's a fairly common pattern - mounting config files into Docker containers. […]

Specifically in regards to the React site, I know for SURE there will be more config in the future, especially with respect to monetization when we get to that point (think Stripe API key, etc.). In order to keep this project as general-purpose as possible, I want to make sure we make smart configuration decisions. Ideally one file for the whole project would be GREAT; however, I am very new to dockerization, so I am all ears.

@shish

shish commented Jan 6, 2021

Both "one config.json for the project, mounted into each container" and "config stored in .env, which docker-compose passes into the containers via environment variables" sound good to me (though the first gives you all the JSON data types and structure, while the later only does key/value string pairs, so maybe the first is significantly better...?)

(I'm another person hoping for configurable port numbers as I'm already using 8084 :P)

@sabjorn

sabjorn commented Jan 6, 2021

@GRVYDEV
okay, so I think this actually just goes deeper in general.

In the short term it likely makes sense to just choose something. Mentioned above:

entrypoint: sed ${VARIABLE} && /entrypoint

This will work and is very easy to configure. It allows you to pass in the variables via an .env file, and also to set them by passing them in when running docker-compose.
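For instance, with a .env file next to docker-compose.yml (Compose reads it automatically), a value can flow straight into a service; EXTERNAL_HOSTNAME mirrors the variable used earlier in this thread:

# .env
EXTERNAL_HOSTNAME=stream.example.com:8080

# docker-compose.yml excerpt
        environment:
            - EXTERNAL_HOSTNAME=${EXTERNAL_HOSTNAME}

# or override ad hoc on the command line:
EXTERNAL_HOSTNAME=other.example.com docker-compose up -d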

Essentially, these variables end up being used as "build-args", which can modify the container during build time.


Now, let's ignore Docker for a second, because it can basically be thought of as any deployment environment.

So, for your future config needs, I would ignore Docker and focus on doing what is right for the project elements. A config file is probably the best idea. But really, spend some time thinking about an extensible method that fits your needs.

@k3d3

k3d3 commented Jan 6, 2021

Hmm, after thinking about it, I definitely agree that a mounted config file is the way to go. Especially with docker-compose, those are very simple to make.

Within docker-compose.yml, you'd just add another section under lightspeed_react:

        volumes:
          - ./local/path/to/config.json:/internal/path/to/config.json:ro

The ro keeps it read-only within the container, and can be omitted to make it read-write (though we probably don't want that here).

@sabjorn

sabjorn commented Jan 6, 2021

@GRVYDEV there are some things you'll probably want to keep Docker-configurable.
For example, the port mappings probably make the most sense being configured (via ENV or an .env file) while keeping the project's default port values.

If this is unclear: basically, in the docker-compose.yml you would change the port number before the ":" to be:

lightspeed-ingest:
    restart: on-failure
    build: ingest/
    network_mode: host
    ports:
        - "${LIGHTSPEED_PORT}:8084" # note: port mappings are ignored under network_mode: host

Oh, additionally, you can also modify the CMD for a container in the docker-compose.yml, so for the WebRTC container you could do:

lightspeed-webrtc:     
    restart: on-failure
    build: webrtc/
    network_mode: host
    ports:
        - "8080:8080"
        - "65535:65535/udp"
    command: ["lightspeed-webrtc", "--addr=${IP_ADDRESS}"]

@sabjorn

sabjorn commented Jan 6, 2021

Hmmmmm, looking deeper, it looks like everything could be set up as a runtime variable configured in docker-compose. The only thing standing out is wsUrl.js.

But this file could have an expression which checks for an ENV and, if that isn't set, falls back to a default.

@GRVYDEV
Owner

GRVYDEV commented Jan 6, 2021

> Hmmmmm, looking deeper, it looks like everything could be set up as a runtime variable configured in docker-compose. The only thing standing out is wsUrl.js. […]

wsUrl.js is no longer used. Instead there is a config.json file in the build folder.

@qdm12

qdm12 commented Jan 6, 2021

How about a shell entrypoint? Just make it sed things from env variables and call serve... at the end. That solves a lot of it, I think, and is extensible.

Ideally a statically compiled program doing both the shell script's role AND the HTTP serving (unless you do server-side rendering) would be best. But you can do that later.

@sabjorn

sabjorn commented Jan 6, 2021

> How about a shell entrypoint? […]

An ENTRYPOINT would likely be the best place for this script to run from.

Check out the NGINX entrypoint.

The exec "$@" at the end is particularly useful because it allows the CMD to run after the entrypoint script has finished.
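Putting those pieces together, a hypothetical /bin/sh entrypoint for the React image might look like this (the path and the WEBSOCKET_HOST variable are placeholders):

#!/bin/sh
set -e

# substitute the websocket host into the built config, if provided
if [ -n "$WEBSOCKET_HOST" ]; then
    sed -i "s|stream.gud.software|$WEBSOCKET_HOST|g" /app/build/config.json
fi

# hand off to CMD, e.g. `serve -s build -l 80`
exec "$@"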

@GRVYDEV
Owner

GRVYDEV commented Jan 6, 2021

> How about a shell entrypoint? […]

While that is a quick fix, we will be dealing with a lot of configuration values in the future, so I want to be future-oriented with this decision.

@qdm12

qdm12 commented Jan 6, 2021

> An ENTRYPOINT would likely be the best place for this script to run from.

Yes, plus you could convert environment variables to flags for serve as well, for convenience.

> While that is a quick fix, we will be dealing with a lot of configuration values in the future, so I want to be future-oriented with this decision.

A shell entrypoint is quick to do but would be the best way, I'd say, whether for common practice or for ease of use for end users. The entrypoint can always be changed later to something more robust like Rust or Go without breaking changes.

I'll try to make a PR with a /bin/sh script for Alpine (who needs bash, right?)

@qdm12

qdm12 commented Jan 7, 2021

I opened GRVYDEV/Lightspeed-react#10 with such an entrypoint and a few tiny Dockerfile improvements. There are still a few questions to iron out and documentation to add, though. Anyone, feel free to criticize 😉

GRVYDEV added the enhancement and High Priority labels Jan 7, 2021
@GRVYDEV
Owner

GRVYDEV commented Jan 7, 2021

> I opened GRVYDEV/Lightspeed-react#10 with such an entrypoint […]

I am a Docker noob, so I may be missing the point of the entrypoint script, but it looks like I should be able to pass it a WEBSOCKET_HOST env var and that will automatically be replaced?

@qdm12

qdm12 commented Jan 7, 2021

You need to replace the value at runtime, not build time. That way the user doesn't have to build the image themselves. To replace it you need a server-side entrypoint of some sort, like a shell script. Although for now there is no HTTP server in the image, I think it should be part of the image and entrypoint.

@simon-ebner

After having prepared Docker images, we should also think of a Helm package for Kubernetes (K8s).

@brettwilcox
Author

For the config file, can we consider TOML? It's so much easier to read and work with.
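Purely illustrative, a hypothetical project-wide config in TOML; every field name here is invented:

[ingest]
port = 8084

[webrtc]
addr = "0.0.0.0"
port = 8080

[frontend]
websocket_host = "stream.example.com:8080"
port = 80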

@qdm12

qdm12 commented Jan 9, 2021

> For the config file, can we consider TOML?

Agreed, but the configuration is in JSON for the React project (its native format). And I'd think environment variables are easier to use than a configuration file, at least for Docker (except for secrets, where we should use files).

@GRVYDEV
Owner

GRVYDEV commented Feb 11, 2021

Closed with #34

GRVYDEV closed this as completed Feb 11, 2021