-
Hi, I'm trying to implement an elastic cluster based on K8s, with Consul for service discovery. Searching the discussions and code history, I noticed there used to be an IMemberDiscoveryService that was later deprecated. Can you advise on the recommended way to integrate with service discovery? My requirement is basically a cluster holding ephemeral data that gets refreshed periodically by another service (I'm considering tmpfs for the WAL, since we don't really need to persist data, only replicate it across all cluster nodes). For high availability and scalability we need to dynamically add/remove nodes simply by scaling K8s instances out/in, so my thinking is to periodically query Consul for all members and keep the cluster membership in sync, roughly as in the sketch below. Since IMemberDiscoveryService was deprecated, can you share some context about it so that I can avoid the same mistakes when implementing my own service discovery integration? Thank you 🙏
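The discovery side I had in mind looks roughly like this (just a sketch against the standard Consul HTTP API, assuming .NET 6+ implicit usings; what I don't know is how to feed the resulting set into the Raft cluster, which is what I'm asking about):

```csharp
using System.Text.Json;

// Ask the Consul agent for all healthy instances of the given service and
// return their addresses. Assumes consul.BaseAddress points at the agent,
// e.g. http://localhost:8500.
static async Task<HashSet<Uri>> GetHealthyMembersAsync(
    HttpClient consul, string serviceName, CancellationToken token)
{
    // Standard Consul HTTP API: only instances passing their health checks.
    var json = await consul.GetStringAsync($"/v1/health/service/{serviceName}?passing", token);
    using var doc = JsonDocument.Parse(json);

    var members = new HashSet<Uri>();
    foreach (var entry in doc.RootElement.EnumerateArray())
    {
        var service = entry.GetProperty("Service");
        var host = service.GetProperty("Address").GetString();
        var port = service.GetProperty("Port").GetInt32();

        // Note: Service.Address can be empty, in which case Node.Address
        // applies; that fallback is omitted here for brevity.
        members.Add(new Uri($"http://{host}:{port}/"));
    }

    return members;
}
```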
Replies: 1 comment 1 reply
-
It was deprecated because Raft has its own membership protocol out of the box. There is no need to maintain an external membership list: Raft nodes keep an up-to-date list of cluster members, whose consistency is guaranteed by the leader. The leader node is responsible for any manipulation of that list: adding members, removing members, and replicating the configuration.
Check this and this article first. If you're using the DotNext.AspNetCore.Cluster library, then check the methods of the IRaftHttpCluster interface. It has everything you need to add/remove cluster nodes through the leader node. Also, take a look at the #122 discussion about K8s hosting.
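If you still want Consul to drive scale-out/in automatically, the usual shape is a small reconciliation loop that runs only on the leader and calls those methods. A rough sketch, not a drop-in implementation: IsRemote, AddMemberAsync and RemoveMemberAsync stand in for the actual members of IClusterMember/IRaftHttpCluster in your DotNext version, so verify the exact names and signatures before using it:

```csharp
using System.Linq;
using DotNext.Net.Cluster.Consensus.Raft.Http;

// Make the Raft membership configuration follow the set of addresses discovered
// via Consul. Only the leader may change the configuration, so followers do nothing.
static async Task ReconcileAsync(
    IRaftHttpCluster cluster,
    IReadOnlySet<Uri> desired,   // healthy addresses reported by Consul
    IReadOnlySet<Uri> current,   // addresses currently in the cluster configuration
    CancellationToken token)
{
    // Skip unless the local node currently believes it is the leader
    // (assumes an IClusterMember.IsRemote property).
    if (cluster.Leader is not { IsRemote: false })
        return;

    // Add newly discovered nodes one at a time through the leader.
    foreach (var address in desired.Except(current))
        await cluster.AddMemberAsync(address, token);    // assumed overload, check IRaftHttpCluster

    // Remove nodes that disappeared from Consul.
    foreach (var address in current.Except(desired))
        await cluster.RemoveMemberAsync(address, token); // assumed overload, check IRaftHttpCluster
}
```

Raft applies membership changes one at a time, which is why the loop awaits each call before moving on. Keep in mind that the Raft configuration remains the source of truth here; Consul only suggests candidates.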