
[ADD] Zabbix components Official docker images #6065

Open · wants to merge 9 commits into master

Conversation

@dotneft commented Jun 10, 2019

Added Zabbix components as Official Docker images.
These images provide the components of the Zabbix monitoring system.

Please also check the documentation pull request.


Checklist for Review

NOTE: This checklist is intended for the use of the Official Images maintainers both to track the status of your PR and to help inform you and others of where we're at. As such, please leave the "checking" of items to the repository maintainers. If there is a point below for which you would like to provide additional information or note completion, please do so by commenting on the PR. Thanks! (and thanks for staying patient with us ❤️)

  • associated with or contacted upstream?
  • does it fit into one of the common categories? ("service", "language stack", "base distribution")
  • is it reasonably popular, or does it solve a particular use case well?
  • does a documentation PR exist? (should be reviewed and merged at roughly the same time so that we don't have an empty image page on the Hub for very long)
  • dockerization review for best practices and cache gotchas/improvements (ala the official review guidelines)?
  • 2+ dockerization review?
  • existing official images have been considered as a base? (ie, if foobar needs Node.js, has FROM node:... instead of grabbing node via other means been considered?)
  • if FROM scratch, tarballs only exist in a single commit within the associated history?
  • passes current tests? any simple new tests that might be appropriate to add? (https://github.com/docker-library/official-images/tree/master/test)

@tianon (Member) commented Jun 10, 2019

This looks like it's a lot of very similar inter-related images -- is that the case? They all appear to share roughly the same structure with the same version numbers, so they appear to all be part of the larger "Zabbix" release as a whole. Is that accurate?

What I'd like to see here is a restructuring of the tags to fit the conventions of existing official images -- I don't think this warrants 11 separate image repositories for what all amount to essentially components of the greater Zabbix experience (especially with how much that's going to fragment the documentation for each, which I'm guessing aren't designed to be used standalone?).

Here are some ideas for tagging structures that match some other official images:

  • zabbix:3.0.28-agent
  • zabbix:3.0.28-appliance
  • zabbix:4.2.3-java-gateway
  • etc

Then things like alpine or ubuntu become modifiers, ala:

  • zabbix:4.2.3-proxy-mysql-alpine

or

  • zabbix:4.2.3-alpine-proxy-mysql

This helps make it clear that the "4.2.3" portion of that is the general Zabbix release version and that the other portions are the specific component of that Zabbix release that we're interested in.

I'm also curious about the rationale for having both Ubuntu and CentOS-based images -- what's the use case for maintaining both of those? I get the argument for having one or the other of them, but no matter which one you choose it should run fine on either host OS, and then having Alpine alongside that is useful in order to achieve a smaller footprint (assuming Zabbix is officially supported on the Alpine platform / musl libc), but I'm having a hard time understanding the use case for supporting both Ubuntu and CentOS variants. Isn't one or the other of those more highly recommended by Zabbix upstream? (and thus a better candidate for focusing on?)

Additionally, many of the Dockerfiles I looked at appear to be using multi-stage builds, and I'm not sure they're entirely necessary here (or that they make the Dockerfiles themselves more clear), but I'd point to #3383 for what the current status of support in the official-images program at large is on that.

Also, a surprising number of the commits over on https://github.com/zabbix/zabbix-docker appear to be made by root (with no GitHub user association) -- that's not a blocker, but I wanted to point it out in case there's some automation or system somewhere that could do with a proper Git config user.name and user.email set. 👍

@dotneft (Author) commented Jun 11, 2019

This looks like it's a lot of very similar inter-related images -- is that the case? They all appear to share roughly the same structure with the same version numbers, so they appear to all be part of the larger "Zabbix" release as a whole. Is that accurate?
You are absolutely correct! Zabbix software has multiple components:

  1. Zabbix agent
  2. Zabbix server
  3. Zabbix web interface
  4. Zabbix Java Gateway
  5. Zabbix proxy

Some of the components (server, proxy, web interface) can work with different databases; we provide SQLite3-, MySQL- and PostgreSQL-based images. We also provide three operating systems, because some OSes have issues with third-party libraries (memory leaks, unsupported dependencies, etc.), so we have CentOS-, Ubuntu- and Alpine Linux-based images.

What I'd like to see here is a restructuring of the tags to fit the conventions of existing official images -- I don't think this warrants 11 separate image repositories for what all amount to essentially components of the greater Zabbix experience (especially with how much that's going to fragment the documentation for each, which I'm guessing aren't designed to be used standalone?).

I agree; it is probably a good idea to merge the different database images. But I'm not sure how to merge everything into one, because in that case the documentation would become huge! It is possible to use the components standalone, but usually they are not used as a standalone solution. The server and frontend components are mandatory; all the others are optional and serve different purposes (like extended monitoring, or monitoring remote locations behind NAT).

I'm also curious about the rationale for having both Ubuntu and CentOS-based images -- what's the use case for maintaining both of those? I get the argument for having one or the other of them, but no matter which one you choose it should run fine on either host OS, and then having Alpine alongside that is useful in order to achieve a smaller footprint (assuming Zabbix is officially supported on the Alpine platform / musl libc), but I'm having a hard time understanding the use case for supporting both Ubuntu and CentOS variants. Isn't one or the other of those more highly recommended by Zabbix upstream? (and thus a better candidate for focusing on?)

Actually, because of third-party library issues :-) But I agree with you that it is better to support only one OS. Which one is better (Ubuntu or CentOS) according to Docker guidelines?

Additionally, many of the Dockerfiles I looked at appear to be using multi-stage builds, and I'm not sure they're entirely necessary here (or that they make the Dockerfiles themselves more clear), but I'd point to #3383 for what the current status of support in the official-images program at large is on that.

We did this precisely to keep the Dockerfiles clearer and to avoid garbage in the final images. If multi-stage builds are not officially supported, we can review the images and avoid multi-stage where possible. Is it better to avoid it?

Also, a surprising number of the commits over on https://github.com/zabbix/zabbix-docker appear to be made by root (with no GitHub user association) -- that's not a blocker, but I wanted to point it out in case there's some automation or system somewhere that could do with a proper Git config user.name and user.email set. 👍

Sorry for that. Will do it! Thank you!

@dotneft (Author) commented Jul 2, 2019

Hello tianon!

Sorry for bothering you! Did you check my comments?

@tianon (Member) commented Jul 2, 2019

I agree; it is probably a good idea to merge the different database images.

Indeed -- those should definitely be under the same namespace. Given the assumption that the documentation for each of the five components is going to be very different, I think having five images is likely reasonable.

Which one is better (Ubuntu or CentOS) according to Docker guidelines?

If you've got a lot of third-party library issues, I'd recommend choosing the one with the least number of those, but if that's not an easy choice, the best I can offer between those two is that Ubuntu is updated a lot more regularly (and is generally smaller).

We did this precisely to keep the Dockerfiles clearer and to avoid garbage in the final images. If multi-stage builds are not officially supported, we can review the images and avoid multi-stage where possible. Is it better to avoid it?

We definitely recommend avoiding it if it isn't actually necessary -- for RPM-based images, it's a bit complicated to clean up properly, but for APT/deb-based and Alpine-based images it's really not very difficult, and I'd be happy to help with any conversions there.
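
The usual single-stage cleanup for deb-based images looks something like this (a sketch; package names and build steps are illustrative):

    RUN apt-get update \
        && apt-get install -y --no-install-recommends gcc make libssl-dev \
        && ./configure && make && make install \
        # remove the build toolchain and package lists again in the same layer,
        # so nothing from the build ends up in the final image
        && apt-get purge -y --auto-remove gcc make libssl-dev \
        && rm -rf /var/lib/apt/lists/*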

@dotneft (Author) commented Jul 12, 2019

Thank you for your help!

So, finally, we think Alpine-based images are the best option! Sorry, I probably did not quite understand you regarding the same namespace and different images, but what do you think about the following structure:

zabbix-agent: 3.0.28, 3.0, 4.0.10, 4.0, 4.2.4, 4.2, latest
zabbix-java-gateway: 3.0.28, 3.0, 4.0.10, 4.0, 4.2.4, 4.2, latest
zabbix-proxy: 3.0.28-sqlite3, 3.0-sqlite3, 4.0.10-sqlite3, 4.0-sqlite3, 4.2.4-sqlite3, 4.2-sqlite3, sqlite3,
              3.0.28-mysql, 3.0-mysql, 4.0.10-mysql, 4.0-mysql, 4.2.4-mysql, 4.2-mysql, mysql, latest
zabbix-server: 3.0.28-mysql, 3.0-mysql, 4.0.10-mysql, 4.0-mysql, 4.2.4-mysql, 4.2-mysql, mysql,
               3.0.28-pgsql, 3.0-pgsql, 4.0.10-pgsql, 4.0-pgsql, 4.2.4-pgsql, 4.2-pgsql, pgsql, latest
zabbix-web: 3.0.28-apache-mysql, 3.0-apache-mysql, 4.0.10-apache-mysql, 4.0-apache-mysql, 4.2.4-apache-mysql, 4.2-apache-mysql, apache-mysql,
            3.0.28-apache-pgsql, 3.0-apache-pgsql, 4.0.10-apache-pgsql, 4.0-apache-pgsql, 4.2.4-apache-pgsql, 4.2-apache-pgsql, apache-pgsql,
            3.0.28-nginx-mysql, 3.0-nginx-mysql, 4.0.10-nginx-mysql, 4.0-nginx-mysql, 4.2.4-nginx-mysql, 4.2-nginx-mysql, nginx-mysql,
            3.0.28-nginx-pgsql, 3.0-nginx-pgsql, 4.0.10-nginx-pgsql, 4.0-nginx-pgsql, 4.2.4-nginx-pgsql, 4.2-nginx-pgsql, nginx-pgsql, latest
zabbix-appliance: 3.0.28, 3.0, 4.0.10, 4.0, 4.2.4, 4.2, latest

Is this idea correct?

I'm not sure that "zabbix-appliance" is needed at all. What do you think?

Also, I removed multi-stage builds from the Alpine images entirely. I think they are better now :-)

I have another question: we have our own repository under zabbix/. What is the usual practice for non-official repositories? Can we keep it, or should we remove it? Are there any restrictions?

@dotneft (Author) commented Jul 19, 2019

Hello!

Could you check my previous comment?

@tianon (Member) commented Jul 22, 2019

Is this idea correct?

Yeah, those look more reasonable, although still a bit of a broad set of images and I'm not thrilled that they're not really "generally useful" split apart separately (see #6282 for our TODO item to document that somewhat better).

Also to be clear, there's no requirement to specify latest if you don't want to (especially on things where a choice needs to be made) -- if you'd rather force users to be explicit, that's reasonable too.

I'm not sure that "zabbix-appliance" is needed at all. What do you think?

That does seem unnecessary and unlikely to be acceptable here -- it bundles everything right? It seems better to document appropriately how to run all the components appropriately connected together.

I have another question, we have our own repository zabbix/. How it must be usually with non-official repositories? Can we keep it or remove? Is there any restrictions?

Totally up to you -- you're free to keep it and continue using it, or you can remove it. Some upstreams like to build/push the same things in both places, some like to ditch the org once they have an image, some prefer to use their organization namespace to push experimental or bleeding edge builds, etc.

@sersor commented Aug 12, 2019

Hi!

Guys, we have had no answer for 20 days about the status of our current submission. Please provide either some comments or the next steps we need to take to move forward on this.

Thank you in advance!

@tianon (Member) commented Aug 13, 2019

Sorry, I think we're still disconnected on the "generally useful" bit -- each of these images is not individually "generally useful" and on top of that we don't have any real prior precedent for such a large single-software sprawl of separate images. I think the ELK or TICK stacks are the closest analogs we have, but in those cases they're still individually useful. Elasticsearch is a generic time-series database. Logstash is a generic log-stream processor (Elasticsearch is simply one optional input/output plugin among many others within it).

Stealing from README-short.txt for each of the remaining images:

  • zabbix-agent: Zabbix agent is deployed on monitoring target to actively monitor local resources and applications.
  • zabbix-java-gateway: Zabbix Java Gateway performs native support for monitoring JMX applications.
  • zabbix-proxy: Zabbix proxy is process that may collect monitoring data from monitored devices.
  • zabbix-server: Zabbix server is the central process of Zabbix software.
  • zabbix-web: Zabbix web-interface

Each of these really seems like a tightly coupled or optional component of a standard Zabbix installation.

What makes Agent and Java Gateway different? What makes Proxy separate from those?

Regarding documentation, we very frequently recommend that maintainers document the minimum and refer folks to external documentation for more detailed usage information -- see https://hub.docker.com/_/registry for a good example of doing that very minimally ("here's a basic docker run line with some follow-up usage of that instance, and here's the link to the documentation for getting more extreme").
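
(For reference, the registry page's minimal usage boils down to roughly the following, plus a pointer at the upstream documentation:)

    docker run -d -p 5000:5000 --name registry registry:2
    docker pull ubuntu
    docker tag ubuntu localhost:5000/ubuntu
    docker push localhost:5000/ubuntu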

It might also help to start with the core subset of Zabbix functionality -- for example, is it strictly necessary for Java Gateway and the Proxy bits to be part of the initial Zabbix official image, or are they optional extra components which could just as easily be added after-the-fact? (or even a 3rd party replacement?) Are these similar to Prometheus's "exporters"?

From what I can see, the core set would be Agent, Server, and Web, and the MVP high-level usage would be as simple as docker run server && docker run web followed by docker run agent on as many nodes as it becomes desirable to monitor?
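
In other words, something in the spirit of the following (repository/tag names follow the structure proposed above; database setup and most configuration environment variables are elided):

    docker network create zabbix
    docker run -d --network zabbix --name zabbix-server zabbix-server:4.2-mysql          # plus DB connection settings
    docker run -d --network zabbix --name zabbix-web -p 80:80 zabbix-web:4.2-nginx-mysql
    docker run -d --network zabbix -e ZBX_SERVER_HOST=zabbix-server zabbix-agent:4.2     # on each monitored node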

@sersor commented Aug 13, 2019

Thank you for your reply. With all the explanation, we now agree with your suggested model. We are starting to reconfigure everything and hope to provide another image in just a few days.

@tianon (Member) commented Aug 13, 2019 via email

@dotneft (Author) commented Aug 13, 2019

Hello! Yes, the images will be the same; I will just merge the repositories into one with different tags for the Zabbix components.

@tianon (Member) commented Aug 13, 2019

Perfect, we should be able to multiplex a little here then. I took a brief look over the Dockerization, and I'm a bit confused by the ~1000 line shell script -- I get the intent (especially given the removed "appliance" image), but for something like "run the agent" it appears to amount to essentially "copy a few environment variables into the config and fire off zabbix_agentd --foreground". Is that accurate? I'm really concerned that it's not very obvious for users what's happening here and what's ultimately going to run. For example, does zabbix_agentd accept any other interesting command-line parameters that users might want to specify?

What we'd typically recommend for this sort of scenario instead is to have a simple script that just sets up the configuration followed by exec "$@" and that's your entrypoint, then your default command is something like zabbix_agentd (see https://github.com/docker-library/redis/blob/3193aad4ff9f2ea482a57d4a296b6541237f682c/5.0/docker-entrypoint.sh for a hyper-simplified example that only fixes filesystem permissions before running redis-server, and only when running redis-server). Changing the command via an environment variable is essentially ignoring Docker's own built-in functionality for doing exactly that.
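
In sketch form (the config handling shown is illustrative, not the actual Zabbix logic):

    #!/bin/sh
    set -e

    # if the first argument looks like a flag, assume the user meant the default daemon
    if [ "${1#-}" != "$1" ]; then
        set -- zabbix_agentd "$@"
    fi

    # only touch the configuration when we're actually about to run the daemon
    if [ "$1" = 'zabbix_agentd' ]; then
        : "${ZBX_SERVER_HOST:=zabbix-server}"
        sed -i "s/^Server=.*/Server=${ZBX_SERVER_HOST}/" /etc/zabbix/zabbix_agentd.conf
    fi

    exec "$@"

paired in the Dockerfile with:

    ENTRYPOINT ["docker-entrypoint.sh"]
    CMD ["zabbix_agentd", "--foreground", "-c", "/etc/zabbix/zabbix_agentd.conf"]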

As-written, it also might give users the impression that they can simply modify a few environment variables and expect the image to do something drastically different, but looking at the compilation flags on Zabbix (and the copied binaries), that's not really the case. There's also reference to supervisord here that won't ever be invoked in any of the images being proposed (and wouldn't be acceptable anyhow -- hence the removal of "appliance").

In the Dockerfile, there are quite a few ARG set, which isn't a blocker by itself but it does raise some eyebrows for me. They appear to simply be convenience variables in many cases (the empty APK_FLAGS_COMMON, APK_FLAGS_DEV, etc)? There are also variables that aren't set (BUILD_DATE, VCS_REF) that we certainly won't be setting (and those are getting embedded in labels which will thus be forever empty).

It seems strange to be setting TERM -- is there a particular reason for doing so?

Image label names should adhere to https://github.com/opencontainers/image-spec/blob/v1.0.1/annotations.md (chiefly maintainer, but you should also know that label-schema is deprecated; see https://github.com/label-schema/label-schema.org#readme -- some of those labels have analogs in the image-spec, some do not).

Those apk del invocations are likely leaving some files behind -- I'd suggest adding --no-network on them.

I noticed the agent appears to have a volume defined on /etc/zabbix/zabbix_agentd.d -- isn't that a configuration directory? Does the image/application expect to modify it in a persistent way that needs to stick around from invocation to invocation? (Given the "permanence" of a VOLUME declaration, we recommend keeping them to just the directories the application stores active state in, and even more specifically active state that's actually necessary to keep if the container is completely destroyed and recreated, such as MySQL's /var/lib/mysql directory.)
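
In other words (paths illustrative):

    # reasonable: state the daemon itself writes and needs across container recreation
    VOLUME /var/lib/zabbix
    # better avoided: user-supplied configuration directories like /etc/zabbix/zabbix_agentd.d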

Using su for switching users might have odd signal-forwarding and process tree behavior -- I'd recommend checking out https://github.com/tianon/gosu#why and verifying that the images function as expected (especially looking at docker top ... and whether docker stop can gracefully stop the container).
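
The change would be roughly:

    # instead of:
    #   exec su zabbix -s "/bin/bash" -c "/usr/sbin/zabbix_agentd --foreground -c /etc/zabbix/zabbix_agentd.conf"
    # exec the daemon directly as the target user, keeping signals and the process tree simple:
    exec gosu zabbix /usr/sbin/zabbix_agentd --foreground -c /etc/zabbix/zabbix_agentd.conf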

@dotneft (Author) commented Aug 13, 2019

Perfect, we should be able to multiplex a little here then. I took a brief look over the Dockerization, and I'm a bit confused by the ~1000 line shell script -- I get the intent (especially given the removed "appliance" image), but for something like "run the agent" it appears to amount to essentially "copy a few environment variables into the config and fire off zabbix_agentd --foreground". Is that accurate? I'm really concerned that it's not very obvious for users what's happening here and what's ultimately going to run. For example, does zabbix_agentd accept any other interesting command-line parameters that users might want to specify?

Yes, the Zabbix agent and other components have additional arguments as well (for example, testing metrics and increasing/decreasing the debug level). I actually use the same shell script for all images, just to keep them consistent.

As-written, it also might give users the impression that they can simply modify a few environment variables and expect the image to do something drastically different, but looking at the compilation flags on Zabbix (and the copied binaries), that's not really the case. There's also reference to supervisord here that won't ever be invoked in any of the images being proposed (and wouldn't be acceptable anyhow -- hence the removal of "appliance").

Supervisord is used in the Nginx-based web images. We need php-fpm + nginx there.

In the Dockerfile, there are quite a few ARG set, which isn't a blocker by itself but it does raise some eyebrows for me. They appear to simply be convenience variables in many cases (the empty APK_FLAGS_COMMON, APK_FLAGS_DEV, etc)? There are also variables that aren't set (BUILD_DATE, VCS_REF) that we certainly won't be setting (and those are getting embedded in labels which will thus be forever empty).

These variables are specified to keep all the images unified. Some of them are required, some are not. So you would prefer to remove the empty ones?

It seems strange to be setting TERM -- is there a particular reason for doing so?

Yes. Zabbix can process SIGUSR signals, so we need to be sure that only SIGTERM is used to stop containers.

Image label names should adhere to https://github.com/opencontainers/image-spec/blob/v1.0.1/annotations.md (chiefly maintainer, but you should also know that label-schema is deprecated; see https://github.com/label-schema/label-schema.org#readme -- some of those labels have analogs in the image-spec, some do not).

Sorry, I did not understand: which one is better to use now?

Those apk del invocations are likely leaving some files behind -- I'd suggest adding --no-network on them.

Will do! Thank you! In any case, I checked all the images for orphaned files and did not find any :-)

I noticed the agent appears to have a volume defined on /etc/zabbix/zabbix_agentd.d -- isn't that a configuration directory? Does the image/application expect to modify it in a persistent way that needs to stick around from invocation to invocation? (Given the "permanence" of a VOLUME declaration, we recommend keeping them to just the directories the application stores active state in, and even more specifically active state that's actually necessary to keep if the container is completely destroyed and recreated, such as MySQL's /var/lib/mysql directory.)

Yes, users can extend the configuration using additional files, like virtual hosts in Nginx or Apache.

Using su for switching users might have odd signal-forwarding and process tree behavior -- I'd recommend checking out https://github.com/tianon/gosu#why and verifying that the images function as expected (especially looking at docker top ... and whether docker stop can gracefully stop the container).

I know about that issue, but in the case of Zabbix there are no problems at all. I have already checked it many times :-)

# docker top zabbix-docker_zabbix-proxy-mysql_1
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                4077                4018                0                   21:04               ?                   00:00:00            /sbin/tini -- /usr/bin/docker-entrypoint.sh
100                 4306                4077                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy --foreground -c /etc/zabbix/zabbix_proxy.conf
100                 5000                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: configuration syncer [loading configuration]
100                 5001                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: heartbeat sender [sending heartbeat message failed in 0.006038 sec, idle 60 sec]
100                 5002                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: data sender [sent 0 values in 0.001873 sec, idle 1 sec]
100                 5003                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: housekeeper [startup idle for 30 minutes]
100                 5004                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: http poller #1 [got 0 values in 0.001266 sec, idle 5 sec]
100                 5005                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: discoverer #1 [processed 0 rules in 0.005073 sec, idle 60 sec]
100                 5006                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: history syncer #1 [processed 0 values in 0.000034 sec, idle 1 sec]
100                 5007                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: history syncer #2 [processed 0 values in 0.000050 sec, idle 1 sec]
100                 5008                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: history syncer #3 [processed 0 values in 0.000034 sec, idle 1 sec]
100                 5009                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: history syncer #4 [processed 0 values in 0.000031 sec, idle 1 sec]
100                 5010                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: self-monitoring [processed data in 0.000047 sec, idle 1 sec]
100                 5011                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: task manager [processed 0 task(s) in 0.000752 sec, idle 5 sec]
100                 5012                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: poller #1 [got 0 values in 0.000027 sec, idle 5 sec]
100                 5013                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: poller #2 [got 0 values in 0.000028 sec, idle 5 sec]
100                 5014                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: poller #3 [got 0 values in 0.000047 sec, idle 5 sec]
100                 5015                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: poller #4 [got 0 values in 0.000022 sec, idle 5 sec]
100                 5018                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: poller #5 [got 0 values in 0.000028 sec, idle 5 sec]
100                 5023                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: unreachable poller #1 [got 0 values in 0.000034 sec, idle 5 sec]
100                 5024                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: trapper #1 [processed data in 0.000000 sec, waiting for connection]
100                 5025                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: trapper #2 [processed data in 0.000000 sec, waiting for connection]
100                 5026                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: trapper #3 [processed data in 0.000000 sec, waiting for connection]
100                 5027                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: trapper #4 [processed data in 0.000000 sec, waiting for connection]
100                 5028                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: trapper #5 [processed data in 0.000000 sec, waiting for connection]
100                 5029                4306                0                   21:04               ?                   00:00:00            /usr/sbin/zabbix_proxy: icmp pinger #1 [got 0 values in 0.000028 sec, idle 5 sec]
# time docker stop zabbix-docker_zabbix-proxy-mysql_1 && docker logs  zabbix-docker_zabbix-proxy-mysql_1 | tail -n10
zabbix-docker_zabbix-proxy-mysql_1

real	0m0.043s
user	0m0.018s
sys	0m0.033s
   138:20190813:210913.983 cannot send proxy data to server at "zabbix-server": proxy "zabbix-proxy-mysql" not found
   138:20190813:210914.985 cannot send proxy data to server at "zabbix-server": proxy "zabbix-proxy-mysql" not found
   138:20190813:210915.989 cannot send proxy data to server at "zabbix-server": proxy "zabbix-proxy-mysql" not found
   138:20190813:210916.992 cannot send proxy data to server at "zabbix-server": proxy "zabbix-proxy-mysql" not found
   138:20190813:210917.994 cannot send proxy data to server at "zabbix-server": proxy "zabbix-proxy-mysql" not found
   138:20190813:210918.998 cannot send proxy data to server at "zabbix-server": proxy "zabbix-proxy-mysql" not found
     6:20190813:210919.944 Got signal [signal:15(SIGTERM),sender_pid:1,sender_uid:0,reason:0]. Exiting ...
   144:20190813:210919.945 syncing history data in progress... 
   144:20190813:210919.945 syncing history data done
     6:20190813:210919.951 Zabbix Proxy stopped. Zabbix 4.0.11 (revision 53bb6bc).

@dotneft (Author) commented Aug 19, 2019

Hello Tianon!

Sorry for bothering you! Did you check my comment?

@tianon (Member) commented Aug 20, 2019

Yes, the Zabbix agent and other components have additional arguments as well (for example, testing metrics and increasing/decreasing the debug level). I actually use the same shell script for all images, just to keep them consistent.

Right, I get the intention, but the end result is that it's a very long shell script, and that most of it doesn't apply to any given image that it's included in, and it's a bit hard to follow and understand what it's doing as a result.

Supervisord is used in the Nginx-based web images. We need php-fpm + nginx there.

Running supervisord with multiple processes is going to be a non-starter -- see https://github.com/docker-library/docs/blob/5160d2b90548fcbfa07f7edf6b971e53921b8ba7/php/README.md#phpversion-fpm for an example of how the php image handles this (and thus also any image that's FROM php does).
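
In sketch form, that means two cooperating containers instead of supervisord (the FPM tag here is hypothetical, and the nginx configuration, which needs to fastcgi_pass to the FPM container on port 9000, is elided):

    docker network create zabbix
    docker run -d --network zabbix --name zabbix-web-fpm zabbix-web:4.2-fpm
    docker run -d --network zabbix --name web -p 80:80 \
        -v "$PWD/nginx.conf:/etc/nginx/conf.d/default.conf:ro" nginx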

These variables are specified to keep all the images unified. Some of them are required, some are not. So you would prefer to remove the empty ones?

I just don't understand their purpose -- most of them are only used once or twice, and I think it would be more straightforward to just embed their values in the places they are used?

It seems strange to be setting TERM -- is there a particular reason for doing so?

Yes. Zabbix can process SIGUSR signals, so we need to be sure that only SIGTERM is used to stop containers.

Sorry, I wasn't referring to SIGTERM, but rather TERM=xterm as an environment variable.

Image label names should adhere to https://github.com/opencontainers/image-spec/blob/v1.0.1/annotations.md (chiefly maintainer, but you should also know that label-schema is deprecated; see https://github.com/label-schema/label-schema.org#readme -- some of those labels have analogs in the image-spec, some do not).

Sorry, I did not understand: which one is better to use now?

All labels should be using reverse-DNS notation (org.example.foo.bar=baz), and I would recommend sticking to labels explicitly defined by the OCI at https://github.com/opencontainers/image-spec/blob/v1.0.1/annotations.md unless there's a good reason to deviate.
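
For example (values illustrative):

    LABEL org.opencontainers.image.title="Zabbix agent" \
          org.opencontainers.image.version="${ZBX_VERSION}" \
          org.opencontainers.image.source="https://github.com/zabbix/zabbix-docker" \
          org.opencontainers.image.licenses="GPL-2.0"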

I noticed the agent appears to have a volume defined on /etc/zabbix/zabbix_agentd.d -- isn't that a configuration directory? Does the image/application expect to modify it in a persistent way that needs to stick around from invocation to invocation? (Given the "permanence" of a VOLUME declaration, we recommend keeping them to just the directories the application stores active state in, and even more specifically active state that's actually necessary to keep if the container is completely destroyed and recreated, such as MySQL's /var/lib/mysql directory.)

Yes, users can extend the configuration using additional files, like virtual hosts in Nginx or Apache.

So in that case, I'd recommend ditching the VOLUME on anything that's a user-defined configuration file location so that they're easy to extend via RUN in a Dockerfile that's FROM zabbix:... (with a VOLUME defined, those configuration locations will not be modifiable by a sub image and will require a bind-mount at runtime instead).

@dotneft (Author) commented Aug 20, 2019

  1. Ok. I will update all the entrypoints :-)
  2. Hm... do you mean that I need to use two images (web server + php-fpm with a TCP socket instead of a file socket) rather than one image?
  3. Ok. I will remove the unused APK_FLAGS* variables.
  4. That was a very long time ago :-) If I remember correctly, without that env variable mcedit and some other editors do not work properly and cut off the screen.
  5. So you are talking about the org.opencontainers.image labels, right?
  6. Sorry, I do not quite understand how RUN can replace VOLUME. In this case, to add custom checks, will the end user have to build their own image?

@dotneft (Author) commented Aug 20, 2019

  1. In this case I need to prepare two images for each web-server variant. That looks more complex than just providing a complete solution for the Zabbix frontend component.

@tianon (Member) commented Aug 23, 2019

  1. I'd recommend going with an Apache-based variant to start with since it's much simpler to get working standalone (and perhaps grow into a more complex PHP-FPM variant later)

  2. ok, but how common is it going to be for users to run console editors interactively inside these images/containers? shouldn't that sort of thing be happening on the host instead?

  3. yes, org.opencontainers.image labels (or ditching labels entirely if you prefer)

  4. ok, here's a scenario -- imagine a user wanted to create their own image with some pre-populated configuration so they could deploy it across many environments; they'd typically do something like this:

FROM zabbix:X.Y.Z-foo
RUN ... > /etc/zabbix/...
# or
COPY ... /etc/zabbix/.../

If the /etc/zabbix/... is a VOLUME in zabbix:X.Y.Z-foo, those won't work, so I'd recommend keeping all VOLUME declarations to the bare minimum of directories that the Zabbix software itself expects to be able to change at runtime and have persistent between restarts. Anything user-specified let the users handle themselves if they so desire (either via -v /etc/zabbix/... for a generic volume or -v ...:/etc/zabbix/... for a bind-mount or named volume, for example). This leaves them open to be modifiable in a child image.

@dotneft (Author) commented Sep 5, 2019

Hello Tianon!

I have changed the images according to your requirements. Currently I have merged only the 3.0 branch (https://github.com/zabbix/zabbix-docker/tree/3.0). Could you check it and approve if everything is good?

@sersor commented Sep 9, 2019

Hi, Tianon!

May I kindly ask you to provide your feedback on the recently submitted images?

Thank you in advance!

@sersor commented Sep 16, 2019

Hi, guys!
Any chance we could get your comments on the latest submission? It has been 11 days.
Thank you in advance!

@tianon (Member) commented Sep 19, 2019

Looking at just https://github.com/zabbix/zabbix-docker/tree/3.0/agent/alpine for now to simplify review (because this is frankly a lot, and I think many of the others are similar):


ARG APK_FLAGS_PERSISTENT="--clean-protected --no-cache"
ARG APK_FLAGS_DEV="--no-cache"

Hadn't we discussed removing these variables?


ENV TERM=xterm ZBX_VERSION=${ZBX_VERSION} ZBX_SOURCES=${ZBX_SOURCES}

Is TERM necessary for a real-world expected use case of the container?

Also, all the version-number-embedding lines should be moved as far down in the Dockerfile as possible to maximize cache when versions are updated (for example, the block that installs tini, bash, etc should be above the lines specifying version numbers).
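
For example (version number and packages illustrative):

    FROM alpine:3.10
    # stable, rarely-changing dependencies first -- this layer stays cached across version bumps
    RUN apk add --no-cache bash tini
    # version-specific details as late as possible
    ARG ZBX_VERSION=4.2.4
    ENV ZBX_VERSION=${ZBX_VERSION}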


    apk update && \

Given that you're using apk --no-cache, using apk update is unnecessary.


            alpine-sdk \

This package is specifically for building Alpine packages (which this isn't doing), so I'd recommend switching to a smaller set of packages to install based on which of that package's dependencies are actually required instead (binutils, gcc, g++, libc-dev, make, etc).
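
For example, grouped as a virtual package so the later removal stays clean (build steps illustrative):

    RUN apk add --no-cache --virtual .build-deps \
            binutils gcc g++ libc-dev make \
        && ./configure && make && make install \
        # remove the whole build toolchain in the same layer, without touching the network
        && apk del --no-network .build-deps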


set -eo pipefail

set +e

This seems kind of odd -- why set -e only to immediately set +e?


ZBX_SERVER_HOST=${ZBX_SERVER_HOST:-"zabbix-server"}

You'll likely be interested in https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#Shell-Parameter-Expansion, especially :=:

: ${ZBX_SERVER_HOST:="zabbix-server"}

    exec su zabbix -s "/bin/bash" -c "/usr/sbin/zabbix_agentd --foreground -c /etc/zabbix/zabbix_agentd.conf"

Does zabbix_agentd have any other command-line flags that users might be interested in specifying for themselves? (Currently if they take the naive approach of trying something like docker run zabbix:agent --flags or even docker run zabbix:agent zabbix_agentd --flags it will either fail or do unexpected things like run zabbix_agentd as root.)


passes current tests? any simple new tests that might be appropriate to add? (https://github.com/docker-library/official-images/tree/master/test)

(from the NEW-IMAGE-CHECKLIST)

I don't think the existing entrypoint is going to pass the tests as-is.

@dotneft (Author) commented Oct 3, 2019

Hello!

Thank you! I believe I have fixed everything you mentioned. Could you check for any other issues in https://github.com/zabbix/zabbix-docker/tree/3.0/ ?

And yes, now it passes the test checks :-)

Kind regards,
Alexey

@dotneft (Author) commented Oct 26, 2019

Hello!

Sorry for bothering you! Is there any news?

@dotneft (Author) commented Nov 22, 2019

Any news? ;-)

@dotneft (Author) commented Jan 7, 2020

Hello!

Sorry for bothering you. Did you check my previous comments?

@github-actions (bot) commented May 5, 2020

Diff for 6141ef2:
failed fetching repo "zabbix"
unable to find a manifest named "zabbix" (in "/tmp/tmp.bTvop6GFoR/oi/library" or as a remote URL)
cp: cannot stat 'tar/zabbix_3.0-agent/["docker-entrypoint.sh",': No such file or directory
