Multi-Container Issue Tracker #114
@immauss Please let me know if the multi-container setup is working?
@immauss I tried to run the "docker-compose.yml" from "mc-test" without any configuration changes, but the "ovas_postgresql" container stays in the RESTARTING state (around 1hr+) while all the other containers are RUNNING. Logs below for reference:

```
ovas_postgresql | postgresql
ovas_postgresql | Starting postgresql for gvmd !!
ovas_postgresql | Starting PostgreSQL...
ovas_postgresql | 2022-05-07 11:43:29.100 GMT [14] LOG: skipping missing configuration file "/data/database/postgresql.auto.conf"
ovas_postgresql | pg_ctl: directory "/data/database" is not a database cluster directory
ovas_gvmd | DB not ready yet
ovas_postgresql exited with code 1
ovas_gvmd | DB not ready yet
openvas | Waiting for redis
ovas_gvmd | DB not ready yet
ovas_gvmd | DB not ready yet
openvas | Waiting for redis
```
OK ... my bad ... I've updated the process in the original post for this issue. The problem was ... I started working on a migration path to PostgreSQL 13. I checked with Greenbone, and I'm expecting the next iteration to use 13, so I started working on what I hope will be a smoother migration for users. And of course, I used the mc-test directory ..... I've added a working docker-compose.yml for the multi-container setup to the master branch in the "multi-container" folder. The other one references the still-failing auto upgrade. (It's really close though...)
Hi @immauss, I just tried to run the "docker-compose.yml" from the "multi-container" folder and it executed. Can you let me know the login password? I tried the default admin/admin, but it's not working. Logs for reference:

```
Choosing container start method from: gsad
Starting Greenbone Security Assitannt !!
Starting Greenbone Security Assistant...
(gsad:79): gsad gmp-WARNING **: 18:10:28.372: Authentication failure for 'admin' from 172.20.0.1. Status was 1.
```

It's stuck at the logs above. Thanks. I have also commented on #109, please take a look when you can.
it "should" be admin:admin by default. make sure you are not reusing the volumes ( This actually ran me in circles for days will trying to test the auto upgrades for postgresql 13 .... ) I've made the habbit of just removing the volumes before starting things up when testing to make sure I have a clean build. -Scott |
And for anyone else looking around here .... mc-pg13 has been working great in my production for almost a week now on PostgreSQL 13!! If you have had any issues or questions, please add them here. Thanks,
I've been fighting with trying to get this going. I had been running a single container, but as part of the pg13 testing, I thought I'd try multi-container too. I've experienced a ton of issues getting gvmd and openvas to start - it looks like one of them is clobbering /run/redis/redis.sock, which breaks things. I used https://github.com/immauss/openvas/blob/master/mc-test/docker-compose.yml. Postgres is nice and happy. gvmd error?
Then the container restarts. I'm using just a Docker volume for /run, as defined in the docker-compose file, and a filesystem for /data. Single container with the
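A quick diagnostic sketch for the socket problem, assuming the container names used elsewhere in this thread (ovas_redis, openvas, ovas_gvmd) and that they all share the same /run volume:

```sh
# Check whether the redis socket exists, and with what ownership,
# from each container that mounts the shared /run volume.
docker exec ovas_redis ls -l /run/redis/redis.sock
docker exec openvas ls -l /run/redis/redis.sock
docker exec ovas_gvmd ls -l /run/redis/redis.sock
```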
Re-trying as a single container with the new image and I'm unable to start scans.
@kjake, did you have any luck with the most recent? The '21.04.09' tag is the most recent multi-container with pg13. -Scott
Hey Scott, I was away on vacation at the time. Let me re-test in the coming week and get back to you. I had reverted to immauss/openvas:latest, and I'm seeing one issue in that build (my tasks become unscheduled).
No worries. I hope you had a nice relaxing time .... -Scott
Closing out in favor of #139
Please use this Issue for any thoughts, notes, additions, or problems with the multi-container build.
The multi-container branch "should" be operational now. If you would like to try it out, here's the path to take.
1. Clone the git repo.
2. Copy the multi-container directory to your location of preference (or just 'cd' to it).
3. Modify the docker-compose.yml to your liking (a rough illustrative excerpt follows the notes below).
Notes:
- It defaults to SKIPSYNC=true, so no NVT sync is performed.
- It also starts a "scannable" container. Check the scannable container's logs for its IP and you can use it as a test target. This container has no ports exposed, but it is on the same Docker network, so you can still scan it.
- There is a user (scannable) with password: Passw0rd
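As a rough illustration only (this is not the repo's docker-compose.yml; the service name below is hypothetical, and only the image tag and SKIPSYNC variable come from this post), the SKIPSYNC setting lives in a service's environment along these lines:

```yaml
# Hypothetical, abbreviated excerpt - check multi-container/docker-compose.yml
# in the repo for the real file.
services:
  gvmd:
    image: immauss/openvas:mc01
    environment:
      - SKIPSYNC=true   # the default in this setup, per the notes above
```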
Then get all the containers running with:
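A minimal sketch, assuming the standard Docker Compose workflow from the directory containing docker-compose.yml:

```sh
# Start the whole stack detached in the background.
docker-compose up -d
```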
After a short time, check to make sure all of the containers are still running.
```
docker ps --all
CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS                  PORTS                                       NAMES
9d2af3a2359f   immauss/openvas:mc01   "/scripts/start.sh o…"   4 days ago    Up 4 days (healthy)                                                 openvas
134302a914af   immauss/openvas:mc01   "/scripts/start.sh g…"   4 days ago    Up 4 days (healthy)     0.0.0.0:8080->9392/tcp, :::8080->9392/tcp   ovas_gsad
b9f412a472d5   immauss/openvas:mc01   "/scripts/start.sh r…"   4 days ago    Up 4 days (healthy)                                                 ovas_redis
a5f17c8f7b3e   immauss/openvas:mc01   "/scripts/start.sh g…"   4 days ago    Up 4 days (healthy)                                                 ovas_gvmd
fcf9abd0322f   immauss/openvas:mc01   "/scripts/start.sh p…"   4 days ago    Up 4 days (healthy)                                                 ovas_postgresql
8bda354fa528   immauss/scannable      "/bin/bash /entrypoi…"   4 days ago    Up 42 minutes                                                       scannable
```
You should see 5 openvas:mc01 containers and a single scannable. If they are all still running, you're good to go.
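If you want to use the scannable container as a test target, a small sketch of finding its address (assuming the container name scannable shown above; per the notes, the IP is also printed in its logs):

```sh
# The scannable container prints its IP in its logs on startup.
docker logs scannable

# Or ask Docker directly for its address on the compose network.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' scannable
```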
BTW ... there is a separate health check for each service, so the healthy status "should" be accurate.
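To query a health check result directly, a small sketch (assuming the container names shown above):

```sh
# Show the most recent health state Docker recorded for a container.
docker inspect --format '{{.State.Health.Status}}' ovas_gvmd
```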