This repository has been archived by the owner on Mar 26, 2020. It is now read-only.
Brick process should run on the node post gluster node reboot.
Details on how to reproduce (minimal and precise)
1. Create a 3-node GCS setup using vagrant.
2. Create a PVC (brick-mux is not enabled).
3. Reboot gluster-node-1 and check `glustercli volume status` on the other gluster nodes.
4. "systemctl enable glusterd2.service" had been set on the gluster node, but for some reason the glusterd2 process did not come up automatically, so the node was rebooted again.
5. This time the glusterd2 service started automatically; check `glustercli volume status` again.
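The steps above can be sketched roughly as the following commands. This is a hedged sketch, not the exact reproduction script: the node name, the PVC manifest file `pvc.yaml`, and the vagrant invocations are assumptions based on the report, and will differ per setup.

```shell
# On the host: bring up the 3-node GCS vagrant cluster
vagrant up

# Create a PVC (brick-mux not enabled); pvc.yaml is a hypothetical manifest
kubectl create -f pvc.yaml

# On each gluster node: make glusterd2 start on boot
systemctl enable glusterd2.service

# Reboot the first gluster node
vagrant reload gluster-node-1

# From one of the other gluster nodes: check brick/volume status
glustercli volume status
```

These commands assume the GCS vagrant environment from the gluster/gcs repository and a kubeconfig pointing at the cluster.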
Information about the environment:
Glusterd2 version used (e.g. v4.1.0 or master): v6.0-dev.115.gitf469248
Operating system used:
Glusterd2 compiled from sources, as a package (rpm/deb), or container:
Using External ETCD: (yes/no, if yes ETCD version): yes
If container, which container image:
Using kubernetes, openshift, or direct install:
If kubernetes/openshift, is gluster running inside kubernetes/openshift or outside: kubernetes
Observed behavior
With a single PVC (brick-mux not enabled), after rebooting gluster-node-1 the brick process on gluster-node-1 does not come back up.
The following messages are seen continuously in the glusterd2 logs:
Expected/desired behavior
Brick process should run on the node post gluster node reboot.
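One plausible contributor to glusterd2 not starting on the first boot is a service-ordering gap: the unit may start before the network (or the external etcd endpoint) is reachable. A systemd drop-in along the following lines could make the ordering explicit and add restart-on-failure; this is a speculative sketch, the file path and directives are assumptions and not taken from the report.

```ini
# /etc/systemd/system/glusterd2.service.d/ordering.conf (hypothetical drop-in)
[Unit]
# Wait until the network is actually online before starting glusterd2
After=network-online.target
Wants=network-online.target

[Service]
# Retry if glusterd2 exits because etcd is not yet reachable
Restart=on-failure
RestartSec=5
```

After adding the drop-in, `systemctl daemon-reload` would be needed for it to take effect.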