Static cluster membership set, but when a new node outside of the list joins, its Registry and DynamicSupervisor still join the cluster? #210
Comments
Ah hmm, I wonder if it works to start a Horde.Registry or Horde.DynamicSupervisor and tell it that it will not be part of the cluster.
Right, so this is what happened in this case: apparently the new Registry/DynamicSupervisor will still try to join the cluster regardless of the static list, which doesn't actually include it. I guess this is not the intended usage of static cluster membership. We were trying to use dynamic cluster membership but it didn't work out. Scaling to 4 replicas was also more of a hypothetical test that shouldn't happen in a real k8s cluster with a fixed number of replicas. Still, I wonder if it would be possible to do something in this case and prevent the new Registry/DynamicSupervisor from joining, or maybe just shut it down if it has a members list that doesn't include itself.
By the way, when I tried to scale down from 4 to 3 again, an exception was raised. So I guess this scenario is probably something unexpected for Horde.
I think you're right that this should at the very least be included in the documentation. I suppose it would be possible to check whether an instance of Horde.Registry was in its own list of members (I guess by ensuring that at least one of the members resolved to the local node).
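A minimal sketch of that check, assuming `Horde.Cluster.members/1` returns `{name, node}` tuples as quoted in the report below; the helper module and function names are hypothetical:

```elixir
defmodule App.MembershipCheck do
  # Hypothetical helper sketching the check suggested above: returns true
  # if at least one of the registry's members resolves to the local node.
  def member_of_own_cluster?(registry) do
    registry
    |> Horde.Cluster.members()
    |> Enum.any?(fn {_name, node} -> node == Node.self() end)
  end
end
```

A node that starts a Registry not listed in its own static members could then shut it down instead of letting it join.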
I believe this is a different scenario from #202; at least, the stacktrace does not match. I have looked into this before, but couldn't find anything obvious. I hope this isn't happening to people on a regular basis; it should be possible to reduce the size of your Horde cluster without the whole thing falling apart.
Original issue description:

We are now trying to use static cluster membership, since dynamic cluster membership seems to be causing issues when one k8s pod becomes temporarily invisible, probably due to some automatic k8s maintenance operations (we're using libcluster's `Kubernetes.DNSSRV` strategy). The `members` argument is specified as a list, as sketched below. The setup is similar for the DynamicSupervisor module.
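The configuration snippet itself was not preserved in this page, so the following is only a sketch of what such a static `members` list typically looks like; the node names and the `namespace` value are assumptions, while the registry name matches the calls quoted below:

```elixir
namespace = "default" # assumed; the report interpolates a namespace variable

children = [
  {Horde.Registry,
   name: App.Module.Registry,
   keys: :unique,
   members: [
     {App.Module.Registry, :"app@service-0.service.#{namespace}.svc.cluster.local"},
     {App.Module.Registry, :"app@service-1.service.#{namespace}.svc.cluster.local"},
     {App.Module.Registry, :"app@service-2.service.#{namespace}.svc.cluster.local"}
   ]}
]
```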
We have a StatefulSet deployment with 3 replicas. When I scale down the pods with `kubectl scale statefulset service --replicas 2`, things seem to work as expected: if I call `Horde.Cluster.members(App.Module.Registry)`, I still see the original list.

However, if I scale up with `kubectl scale statefulset service --replicas 4`, the Registry spun up on the new node seems to still join the cluster for some reason. If I run `Horde.Cluster.members(App.Module.Registry)`, I see the extra entry `{App.Module.Registry, :"app@service-3.service.#{namespace}.svc.cluster.local"}`.

Interestingly, even if I scale back down to 3, that extra Registry remains in the members list, while the extra DynamicSupervisor is gone.
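For illustration, the reported member list after scaling to 4 replicas would look roughly like this (node names assumed as in the sketch above):

```elixir
iex> Horde.Cluster.members(App.Module.Registry)
[
  {App.Module.Registry, :"app@service-0.service.default.svc.cluster.local"},
  {App.Module.Registry, :"app@service-1.service.default.svc.cluster.local"},
  {App.Module.Registry, :"app@service-2.service.default.svc.cluster.local"},
  # the unexpected fourth member that joined despite the static list:
  {App.Module.Registry, :"app@service-3.service.default.svc.cluster.local"}
]
```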
Is this the expected behavior? From the documentation, I thought that Horde should only try to find the members listed in the static list, and not try to add new members to that list. I would expect the Registry and DynamicSupervisor on the fourth node to be ignored.

We're using the `Horde.UniformQuorumDistribution` strategy for the Supervisor, though I feel that should be irrelevant to the membership issue.
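For reference, a sketch of where that strategy is configured on the DynamicSupervisor side; `distribution_strategy` is a Horde.DynamicSupervisor option, while the supervisor name and node names are assumptions carried over from the Registry sketch above:

```elixir
# Child spec for the supervision tree; only distribution_strategy differs
# from Horde's default (Horde.UniformDistribution).
{Horde.DynamicSupervisor,
 name: App.Module.Supervisor,
 strategy: :one_for_one,
 distribution_strategy: Horde.UniformQuorumDistribution,
 members: [
   {App.Module.Supervisor, :"app@service-0.service.default.svc.cluster.local"},
   {App.Module.Supervisor, :"app@service-1.service.default.svc.cluster.local"},
   {App.Module.Supervisor, :"app@service-2.service.default.svc.cluster.local"}
 ]}
```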