
Affinity Management #293

Open
matthieu-robin opened this issue Aug 19, 2024 · 5 comments

Comments

@matthieu-robin

Is it possible to manage affinity by groups of nodes, and per tenant (taints, region, zone, ...)?
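For reference, upstream Kubernetes already expresses this kind of placement with node labels, `nodeAffinity`, and taints/tolerations. A minimal sketch of pinning one tenant's workload to a node group — the `dedicated` taint, tenant value, and zone name are hypothetical; only `topology.kubernetes.io/zone` is a well-known label:

```yaml
# Hypothetical sketch: pin a tenant workload to nodes in one zone and
# tolerate a taint reserved for that tenant's dedicated node group.
apiVersion: v1
kind: Pod
metadata:
  name: tenant-a-workload
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values: ["dc1-zone-a"]   # hypothetical zone name
  tolerations:
    - key: dedicated        # hypothetical taint applied to tenant nodes
      operator: Equal
      value: tenant-a
      effect: NoSchedule
  containers:
    - name: app
      image: nginx
```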

@gecube
Collaborator

gecube commented Aug 19, 2024

Hi! Yes, but please provide a more complete use-case. Example:

"As a DevOps engineer, I want to put tenants on dedicated nodes of the management cluster for better isolation."
...

@matthieu-robin
Author

:-)
The idea: 3 datacenters, multiple hosts per datacenter, and a stretch cluster between the datacenters.
When a user creates a VM, DB, etc., they should be able to select their datacenter. And from a storage point of view, where are the replicas for high availability between datacenters? Within one DC, or split between DCs? etc.
Let me know if I'm not clear enough.

@gecube
Collaborator

gecube commented Aug 19, 2024

So you may run into different options, like:

  • I don't care at all (the system will place the DB on the first appropriate node in any DC)
  • choose one of the DCs: DC1, DC2, DC3
  • use a local volume vs. a replicated volume (this option should be configurable even when a particular DC is selected)

Which ones do you need?

The idea of having a DB cluster stretched between DCs is reasonable, as it is the only option for HA. But it is not the only use-case.
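The "choose one of the DCs" option above maps fairly directly onto standard Kubernetes storage primitives. A hedged sketch using a StorageClass restricted to a single zone — the name, provisioner, and zone label value below are placeholders, not this project's actual API:

```yaml
# Hypothetical sketch: model "keep my volumes in DC1" with a StorageClass
# whose allowedTopologies is restricted to a single zone.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-dc1                       # hypothetical name
provisioner: example.com/csi-driver     # placeholder provisioner
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values: ["dc1"]                 # hypothetical zone label for DC1
```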

@matthieu-robin
Author

If we have a VM in DC1 with replicated storage, it would be interesting to be able to select "replication between DCs" or "replication between nodes in my DC".
In the first case, the VM can restart in a second DC (in case of a DC1 failure).
In the second case, the VM can restart only on a node of DC1, with no HA in case of a DC1 failure.
And the mixed version: I have 2 replicas of my VM in the same DC, and a third replica in an external DC (DC2 or DC3).

The final idea is to offer our users a choice of HA level.
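For pod-based replicas, the mixed layout (two replicas in one DC, a third elsewhere) resembles what standard Kubernetes topologySpreadConstraints express. A hypothetical sketch, not this project's mechanism:

```yaml
# Hypothetical sketch: 3 replicas, at most 2 per zone, so at least one
# replica always lands in a different DC (maxSkew: 2 forbids 3-in-one-zone).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db-replicas            # hypothetical name
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels: { app: db }
  template:
    metadata:
      labels: { app: db }
    spec:
      topologySpreadConstraints:
        - maxSkew: 2
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels: { app: db }
      containers:
        - name: db
          image: postgres:16
```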
What do you think?

@gecube
Collaborator

gecube commented Aug 19, 2024

We don't use VMs for databases, queues, storage... The tenant Kubernetes service runs on top of VMs, and you can still order VMs on your own and install something into them.
