[Feature]: Distribute the Server and Agent on two PCs #260
We tried to use Docker Swarm: https://docs.docker.com/engine/swarm/swarm-tutorial/#open-protocols-and-ports-between-the-hosts However, we were unable to open our machine's ports. We tried it with ufw and iptables; neither worked. |
Port status can be checked with |
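The command the comment above refers to is cut off in this transcript. As one option (our suggestion, not necessarily the tool the authors used), TCP reachability of the swarm ports (2377, 7946) can be probed from the other host with a small bash helper:

```shell
# Minimal TCP probe using bash's /dev/tcp (bash-only, TCP only; the swarm
# UDP ports 7946/4789 need a different tool such as netcat).
check_port() {  # usage: check_port HOST PORT -> prints "open" or "closed"
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

check_port 127.0.0.1 2377
```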
If you want to start a Docker Swarm, Docker cannot run in rootless mode. Instead you have to use: So far, our concept expects the agent to run on the swarm manager PC, because we include local files with volume mounts. As soon as your To deploy a |
If you want to deploy a service on a specific node, the easiest way is to label that node and constrain the deployment in the compose file:
```yaml
services:
  agent:
    build:
      context: ../
      dockerfile: build/docker/agent/Dockerfile
      args:
        - USER_UID=${DOCKER_HOST_UNIX_UID:-1000}
        - USER_GID=${DOCKER_HOST_UNIX_GID:-1000}
    image: my_custom_agent_image:latest
    deploy:
      placement:
        constraints:
          - node.labels.manager == true
    init: true
    tty: true
```

everything after |
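For a `node.labels.manager == true` constraint to match, the target node has to carry that label. A sketch of the command (the node name is a placeholder; real names come from `docker node ls`; the command is printed rather than executed here, since it needs a running swarm):

```shell
# Placeholder node name; substitute the output of `docker node ls`.
NODE="my-manager-node"
# The labelling command the deploy constraint relies on:
echo "docker node update --add-label manager=true $NODE"
```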
|
server rendering offscreen:
So far, the first attempt with |
We tried to create a shared volume for the
The docker-compose.yml is modified by using the volume name and adding the volume separately at the end of the docker-compose file.

```yaml
services:
  carla-simulator:
    command: /bin/bash CarlaUE4.sh -quality-level=High -world-port=2000 -resx=800 -resy=600 -nosound -carla-settings="/home/carla/CarlaUE4/Config/CustomCarlaSettings.ini"
    image: ghcr.io/una-auxme/paf23:leaderboard-2.0
    init: true
    deploy:
      resources:
        limits:
          memory: 16G
    expose:
      - 2000
      - 2001
      - 2002
    environment:
      - XDG_RUNTIME_DIR=/tmp/runtime-carla
    networks:
      - carla
    volumes:
      - x11:/tmp/.X11-unix
volumes:
  x11:
  paf23:
```
|
A docker node stays in a docker swarm after a PC restart. |
We are thinking about using the manual solution using |
Our latest error: Either there is a package missing in the carla server, or we have an issue with the xhost configuration.
|
We currently use https://www.portainer.io/ to get an overview of our docker images in a local web browser. |
Docker PAF23 Swarm 2024-06-17

```
luttkule@imech156-u:~/git/paf23$ docker compose -f build/docker-compose.swarm.yml up
WARN[0000] /home/luttkule/git/paf23/build/docker-compose.swarm.yml: `version` is obsolete
[+] Running 1/0
 ✔ Container build-carla-simulator-1  Created  0.0s
Attaching to carla-simulator-1
carla-simulator-1 | sh: 1: xdg-user-dir: not found
carla-simulator-1 exited with code 1
```

The following suggests that the server always outputs this error ("Error is always there").
However, if I run the server in isolation, the simulator exits.
Using https://carla.readthedocs.io/en/latest/build_docker/ to run the simplest docker image possible, together with https://carla.readthedocs.io/en/latest/adv_rendering_options/#off-screen-mode
The following file is successfully executed with
Now I will create a swarm and try to launch the script there.
This https://stackoverflow.com/questions/72029582/docker-compose-returns-error-about-property-devices-when-trying-to-enable-gpu/72691651#72691651 brought me here:
Solutions to Enable Swarm GPU Support

Both solutions need to follow these steps first:

```json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```
|
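Assuming the JSON above is meant for `/etc/docker/daemon.json` (the comment does not say so explicitly), the change only takes effect after a daemon restart. The steps are printed rather than executed, since they need root and a Docker install:

```shell
# Restart the daemon, then verify that the default runtime switched.
STEPS='sudo systemctl restart docker
docker info --format "{{.DefaultRuntime}}"'
echo "$STEPS"
```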
Update your
|
We were not able to start a swarm service with the docker image directly. However, we were able to launch a service that started a `docker run` command to launch the carla server.
Based on the following search result: https://serverfault.com/a/1089792 |
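A minimal sketch of what such a launcher service could look like (service name, image choice, and the off-screen flag are our assumptions, not the exact file used):

```yaml
services:
  carla-launcher:
    image: docker:cli                                # official Docker CLI image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock    # talk to the host daemon
    command: >
      docker run --rm --gpus all
      ghcr.io/una-auxme/paf23:leaderboard-2.0
      /bin/bash CarlaUE4.sh -RenderOffScreen
```

The mounted Docker socket lets the inner `docker run` bypass the swarm scheduler, which is what makes GPU flags usable again.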
To test the local connection, we installed the carla Python API via pip, upgraded pip, and installed pygame and numpy; then we were able to connect to the carla client/world, and we loaded a new world.
|
When you get the following error in your docker service:

```
sh: 1: xdg-user-dir: not found
No protocol specified
error: XDG_RUNTIME_DIR not set in the environment.
No protocol specified
error: XDG_RUNTIME_DIR not set in the environment.
No protocol specified
error: XDG_RUNTIME_DIR not set in the environment.
```

Remember to use: `sudo xhost +local:` |
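The host-side preparation implied by these errors can be sketched as follows; the directory matches the `XDG_RUNTIME_DIR=/tmp/runtime-carla` set in the compose file, and the xhost line is only printed because it needs a live X session:

```shell
# Create the runtime dir the container expects (owner-only permissions).
mkdir -p /tmp/runtime-carla
chmod 700 /tmp/runtime-carla
# Allow local (non-network) clients such as containers to use the X server:
echo "sudo xhost +local:"
```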
Using the carla python api from a second pc does work without major issues, and changing the client argument from 'localhost' to the server host ip-address is sufficient. Only a slight python-api version mismatch is reported. |
We used a new docker image based on 'ubuntu:focal', upgraded pip and installed carla as a pip package. We were able to replicate a connection. |
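The image described above can be reconstructed roughly like this (a sketch from the description, not the actual Dockerfile; versions unpinned):

```dockerfile
FROM ubuntu:focal
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && python3 -m pip install --upgrade pip \
    && python3 -m pip install carla \
    && rm -rf /var/lib/apt/lists/*
```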
Next: try to use ssh to launch CarlaUE4.sh natively and rewrite the compose file to connect to the ssh pc. |
b5 uses config files:
This helps to add gpu support defined in
This is likely defined in the task file of b5 for the install task (Lines 38 to 59 in a890f26)
(Lines 75 to 79 in a890f26)
This enables paf23/build/docker-compose.nvidia.yml (Lines 1 to 28 in a890f26)
|
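For orientation, nvidia override files of this kind usually follow the Compose GPU-reservation shape; this is a generic sketch, not the actual content of paf23/build/docker-compose.nvidia.yml:

```yaml
services:
  agent:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```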