Cross-DC discovery + updates? #17
Hi @e271828- sorry it took so long to respond. I was on a short hiatus when all this went public.
We currently only run a single cluster per region which handles all our traffic. You can set up a different cluster within the same region by changing how the peers discover each other. For etcd you would change the
I attempted to explain our rationale for not using Redis in this blog post; it also touches on the library vs service/sidecar question. https://www.mailgun.com/blog/gubernator-cloud-native-distributed-rate-limiting-microservices
Yes, but mostly I was highlighting config synchronization. If Gubernator had a config file of available rate limit definitions that was required at startup, then new rate limit definitions required by dependent services would need an out-of-band process (typically Chef or Puppet) to ensure the config had the proper definitions before the dependent service could make use of them. This deployment synchronization is cumbersome and error prone.
@thrawn01 Are your rate limits not shared across regions, then?
@e271828- Currently we do not share rate limits across regions. The simplest answer is to shard accounts across regions such that an account always uses the same region. This works great if your rate limits are tied to an account (like Mailgun's).

But for a more flexible use case you could have Gubernator forward requests across datacenters. You will incur the latency involved in a round-trip call to a remote datacenter, but Gubernator's batching feature would make this efficient for handling many requests. Provided the user is OK with this cross-datacenter latency, I feel this could be a decent solution for some. (Cassandra works this way.)

The real problem for cross-datacenter clusters is keeping the peer list up to date. I don't know of any multi-datacenter key/value store systems that would do this well, and any Raft-based system will be sensitive to latency and might not be a good fit. I'm open to suggestions on how best to solve this issue. Gubernator's peer discovery is pluggable by design, so implementing a solution would not be hard; finding the right solution for this problem is the hard part. I'm open to alternative solutions, as this sounds like a fun problem to solve! Thanks for bringing this up!
One of the guys here mentioned using a gossip protocol for peer discovery across datacenters. This might be a good solution: https://github.com/hashicorp/memberlist I'll take a stab at making a plugin using this library; at a minimum it would be nice to have a peer discovery setup that has no external dependencies.
Sounds interesting; I look forward to taking a look!
Finally getting started on true multi-datacenter support. The approach I'm taking here is that the rate limit is shared across datacenters, but responses to the client don't need to wait for a call to an owning node in a different datacenter; the responses always come from the local datacenter.

To accomplish this, each DC will have its own hash ring, and rate limits will be "owned" by both rings, so rate limits will need to hash their keys against each datacenter's ring. The local owner will aggregate hits and send batched updates to the owner in the other datacenter.

I'm going to use memberlist to discover the nodes in each datacenter. Static node config will also be an option.

I had toyed with the idea of making one large cluster across datacenters where there would be only one owning node and all requests to that node would be forwarded to the owner across DCs. You could still do this with a properly implemented memberlist setup, but doing so would introduce a ton of cross-datacenter chatter, plus the latency of waiting for the other datacenter. It also defeats the purpose of running your app in multiple DCs: if one goes down, you lose half of all the rate limits currently in flight. With my proposed approach, the rate limits are owned by both datacenters and only batched updates cross the datacenter boundary.
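The per-datacenter ring idea above can be sketched as follows. This is a minimal consistent-hash ring, not Gubernator's actual hashing code; the node names and the `Ring` type are hypothetical. The point is that the same key is hashed against each DC's own ring, yielding one owner per datacenter:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring is a minimal consistent-hash ring over a set of node names.
type Ring struct {
	hashes []uint32          // sorted node hashes
	nodes  map[uint32]string // hash -> node name
}

func NewRing(nodes []string) *Ring {
	r := &Ring{nodes: make(map[uint32]string)}
	for _, n := range nodes {
		h := hash(n)
		r.hashes = append(r.hashes, h)
		r.nodes[h] = n
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
	return r
}

func hash(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// Owner returns the first node clockwise from the key's hash on the ring.
func (r *Ring) Owner(key string) string {
	h := hash(key)
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) {
		i = 0 // wrap around the ring
	}
	return r.nodes[r.hashes[i]]
}

func main() {
	// Each datacenter maintains its own ring; the same rate-limit key is
	// hashed against both, so the limit has one owning node per DC.
	east := NewRing([]string{"east-1", "east-2", "east-3"})
	west := NewRing([]string{"west-1", "west-2", "west-3"})

	key := "account-123:requests_per_second"
	fmt.Println("east owner:", east.Owner(key))
	fmt.Println("west owner:", west.Owner(key))
}
```

Each local owner can then serve clients immediately and exchange batched updates with its counterpart in the other datacenter.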
@thrawn01 The architecture docs don’t seem to cover this. How is multi-geo/multi-cluster intended to work? Are you doing full peering across all of your clusters?
Also, would be interesting to understand better why you chose this route rather than simply shipping a rate limiting library or sidecar hitting a pre-existing memory cache. By “no deployment synchronization with a dependent service” are you referring to data structure changes?