How to persist data when used with Docker Swarm? #263
Comments
Same question here. I guess we have to use BROKER_ID_COMMAND; the docs give an example: BROKER_ID_COMMAND: "hostname | awk -F'-' '{print $2}'". However, the broker id must be a unique integer per broker, so we need a command that generates such an integer...
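For reference, this is roughly where such a command sits in a compose file. The hostname pattern (kafka-1, kafka-2, ...) is an assumption; whatever the command prints becomes the broker id, so it must print a bare integer:

```yaml
  kafka:
    image: wurstmeister/kafka
    environment:
      # whatever this command prints becomes the broker id, so it must be a bare,
      # per-broker-unique integer; the awk split assumes hostnames like "kafka-1"
      # ($$ is the compose-file escape for a literal $)
      BROKER_ID_COMMAND: "hostname | awk -F'-' '{print $$2}'"
```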
Yeah, I hit the same issue. My current workaround is to have three service definitions (kafka1, kafka2, kafka3), each with a static KAFKA_BROKER_ID and pinned to a node with a placement constraint (sketched below).
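The original snippet didn't survive, but the approach described would look roughly like this (node hostnames and the ZooKeeper address are made up):

```yaml
version: "3.2"
services:
  kafka1:
    image: wurstmeister/kafka
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    deploy:
      placement:
        constraints:
          - node.hostname == swarm-node-1   # hypothetical node name
  kafka2:
    image: wurstmeister/kafka
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    deploy:
      placement:
        constraints:
          - node.hostname == swarm-node-2   # hypothetical node name
  # kafka3 follows the same pattern with KAFKA_BROKER_ID: 3
```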
It makes sense; I think it is the way to go, and besides, I did not like using deploy: mode: global (it is overkill). Something else that puzzles me: start-kafka.sh contains a piece of code that generates a new logs directory every time the container is started (because its hostname changed), so in the docker-compose file we must add the KAFKA_LOG_DIRS environment variable, I believe.
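A minimal sketch of that, assuming /kafka/kafka-logs as the fixed path and a named volume called kafka-data (both made-up names); pinning KAFKA_LOG_DIRS keeps the broker writing to the same directory no matter which hostname the container gets:

```yaml
  kafka1:
    image: wurstmeister/kafka
    environment:
      # fixed path instead of the hostname-derived default, so restarts reuse the same data
      KAFKA_LOG_DIRS: /kafka/kafka-logs
    volumes:
      - kafka-data:/kafka          # named volume (hypothetical name)

volumes:
  kafka-data:
```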
Here is my swarm stack setup: basically I label a few nodes and pin the kafka services to them (roughly as sketched below).
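The actual stack file isn't shown; one way to express the label-and-pin idea (the label name kafka and the overlay network are assumptions) would be:

```yaml
# label the chosen nodes first, on a manager:
#   docker node update --label-add kafka=true <node-name>
version: "3.2"
services:
  kafka:
    image: wurstmeister/kafka
    networks:
      - kafka
    deploy:
      placement:
        constraints:
          - node.labels.kafka == true   # only runs on the labeled nodes
networks:
  kafka:
    driver: overlay
```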
You also need a KAFKA_BROKER_ID for each kafka instance. What I did in the end was to assign a docker label to each node that contains the broker id for that node (a unique integer starting at 1). Then I use BROKER_ID_COMMAND to look up the node labels and derive the broker id (the command is fairly simple, I don't have it here). Since the command runs inside the container, you must bind-mount /var/run/docker.sock into the container.
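The original command isn't given, so this is only an assumed variant: it reads a Docker engine label (set per node in daemon.json) rather than a swarm node label, since engine labels are visible through docker info without needing a manager:

```yaml
  kafka:
    image: wurstmeister/kafka
    environment:
      # assumes each node's daemon.json carries something like: { "labels": ["broker.id=1"] }
      BROKER_ID_COMMAND: "docker info -f '{{range .Labels}}{{println .}}{{end}}' | grep '^broker.id=' | cut -d= -f2"
    volumes:
      # lets the docker CLI inside the container reach the host daemon
      - /var/run/docker.sock:/var/run/docker.sock
```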
Specifying a broker id is not required; kafka will automatically assign one. The id is saved in the volume, so if the container is destroyed and a new one mounts that volume, the same broker id will be used.
@frranck you should set the KAFKA_LOG_DIRS environment variable. By default this kafka image creates a new folder based on the hostname of the container (which is dynamic in docker). So set the env variable, and mount a volume at that location.
@raarts How do you connect to those Kafka instances from other services inside the swarm? As far as I understand, the hostname is basically the container id, and whenever the container is restarted, that hostname changes. What am I missing?
@tunix if you look at the stack config file, you'll notice there's a 'kafka' network on which all kafka services are located. The kafka service is on that network, and every service that wants to talk to Kafka needs to be put on it as well.
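In compose terms that amounts to sharing an overlay network between kafka and its clients; the client service and environment variable names here are placeholders:

```yaml
services:
  kafka:
    image: wurstmeister/kafka
    networks:
      - kafka                           # brokers live on this overlay network
  my-consumer:                          # hypothetical client service
    image: my-consumer:latest
    environment:
      BOOTSTRAP_SERVERS: kafka:9092     # the service name resolves via swarm DNS on the shared network
    networks:
      - kafka
networks:
  kafka:
    driver: overlay
```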
I have almost the same configuration, but since I wasn't able to telnet to port 9094 of the Kafka instances, I wasn't sure whether they're communicating properly. (I currently don't know why they're not responding to telnet, but I can see some logs from the Kafka service regarding partition assignments etc.) Could you please share your thoughts?
From inside a container you cannot bind to the external address unless you use network mode host for the container; but if you do that, that container cannot be on internal networks. Hope this helps.
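For the 9094 question specifically, the usual pattern is to publish the external port in host mode and keep separate inside/outside listeners. This is only a sketch along the lines of the wurstmeister/kafka-docker swarm example (check the project's README for the exact variables); it needs compose file format 3.2+ for the long port syntax:

```yaml
  kafka:
    image: wurstmeister/kafka
    ports:
      - target: 9094            # external listener
        published: 9094
        protocol: tcp
        mode: host              # bypasses the ingress mesh; the port opens on the node itself
    environment:
      HOSTNAME_COMMAND: "docker info | grep ^Name: | cut -d' ' -f 2"
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://_{HOSTNAME_COMMAND}:9094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```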
@jordijansen you're right. But even when I set the log directory, I still have the problem that my topics are not persistent: the next time I deploy the stack, kafka doesn't recognize them. However, when I go to the log directory that I set, I see the logs, and I also see directories named after the topics from the previous deployment. Do you have any idea what the problem is?
Mostly a question about how to get this setup working.
I want to start Kafka on docker swarm. When I restart the service, I don't want to lose any data. From reading previous discussions, I've already created a volume for /kafka. The issue I have right now is the broker id: I understand it needs to be static for each instance. How can I make it static when using deploy: mode: global? Here is my docker-compose:
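The compose file itself isn't included above; a minimal sketch of the setup described (global mode plus a volume for /kafka) might look something like this, with placeholder paths and addresses:

```yaml
version: "3.2"
services:
  kafka:
    image: wurstmeister/kafka
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LOG_DIRS: /kafka/kafka-logs   # fixed path, as discussed in the comments
    volumes:
      - /var/lib/kafka:/kafka             # host path is a placeholder
    deploy:
      mode: global                        # one broker per node; broker ids still need to be made unique
```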