Add concurrency controls around Cosmos DB #680
Merged
Conversation
Upserting is fine and convenient in a single-writer scenario, but we need to start accounting for multiple writers, whether that means multiple RP replicas or a backend async operations component. The problem with upsert is that if an item we're trying to update was deleted by another writer, we may unintentionally recreate it. Avoid this by splitting the "Set" methods into separate "Create" and "Update" methods. (Deferring an "Update" method for operation documents until later.)
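The commit's actual code isn't visible in this view, but a minimal sketch of the create/update split with the azcosmos Go SDK might look like the following. The ResourceDocument type, cosmosStore struct, and method names are illustrative assumptions, not the PR's code:

```go
package database

import (
	"context"
	"encoding/json"

	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

type ResourceDocument struct {
	ID           string `json:"id"`
	PartitionKey string `json:"partitionKey"`
	// ... remaining resource fields elided ...
}

type cosmosStore struct {
	resources *azcosmos.ContainerClient
}

// CreateResourceDoc fails with 409 Conflict if the item already exists,
// instead of silently overwriting it the way an upsert would.
func (s *cosmosStore) CreateResourceDoc(ctx context.Context, doc *ResourceDocument) error {
	data, err := json.Marshal(doc)
	if err != nil {
		return err
	}
	_, err = s.resources.CreateItem(ctx, azcosmos.NewPartitionKeyString(doc.PartitionKey), data, nil)
	return err
}

// UpdateResourceDoc fails with 404 Not Found if another writer deleted the
// item, instead of unintentionally recreating it the way an upsert would.
func (s *cosmosStore) UpdateResourceDoc(ctx context.Context, doc *ResourceDocument) error {
	data, err := json.Marshal(doc)
	if err != nil {
		return err
	}
	_, err = s.resources.ReplaceItem(ctx, azcosmos.NewPartitionKeyString(doc.PartitionKey), doc.ID, data, nil)
	return err
}
```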
This adapts the Cloud Distributed Lock pattern from: https://github.com/briandunnington/CloudDistributedLock See also this YouTube video which walks through the pattern: https://www.youtube.com/live/Hreew-l5rCQ?si=Zy9mbM86M2Ub8Nh3
The new container holds Cloud Distributed Locks keyed by subscription ID, with a default item TTL of 10 seconds. (10 seconds was chosen somewhat arbitrarily and may need to be tuned.)
Prevents conflicts on PUT/PATCH/DELETE requests by locking the subscription while the request handler runs. If a subscription lock cannot be acquired, the request returns a "409 Conflict" status with a "Retry-After" header in the response.
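As a rough illustration of the container described above, assuming the azcosmos Go SDK: the ensureLocksContainer helper and database wiring are assumptions, while the "Locks" name, subscription-ID keying, and 10-second TTL come from this PR.

```go
package database

import (
	"context"
	"errors"
	"net/http"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

// ensureLocksContainer creates the lock container if it does not exist yet.
// Lock documents are keyed by subscription ID, and the container-level default
// TTL guarantees that a lock left behind by a crashed writer expires on its own.
func ensureLocksContainer(ctx context.Context, db *azcosmos.DatabaseClient) (*azcosmos.ContainerClient, error) {
	defaultTTL := int32(10) // seconds, matching the default lock lifetime above

	_, err := db.CreateContainer(ctx, azcosmos.ContainerProperties{
		ID:                     "Locks",
		PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{Paths: []string{"/id"}},
		DefaultTimeToLive:      &defaultTTL,
	}, nil)
	if err != nil {
		var respErr *azcore.ResponseError
		// 409 Conflict here just means the container already exists.
		if !errors.As(err, &respErr) || respErr.StatusCode != http.StatusConflict {
			return nil, err
		}
	}
	return db.NewContainer("Locks")
}
```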
mjlshen approved these changes on Oct 1, 2024
We discussed a bit offline about using the CreateItem vs. UpsertItem API for creating resources. It's technically correct, but we will work on error handling, a background process, or a Geneva Action to clean up stale database records if needed in the future.
mjlshen added a commit that referenced this pull request on Oct 14, 2024:
With the completion of #680, this PR adds a PodDisruptionBudget to ensure at least 1 replica is running, though by default 2 replicas will be running at all times. Signed-off-by: Michael Shen <[email protected]>
What this PR does
In anticipation of running multiple RP replicas as well as introducing a polling backend for asynchronous operations, we need to add concurrency controls around our Cosmos DB containers to ensure data integrity in the presence of multiple writers.
This PR takes a two-fold approach:
The first and simplest approach is to utilize Optimistic Concurrency Control when updating container items by setting the IfMatchEtag option and allowing for retries in the event of a 412 Precondition Failed error (covered in the 1st commit).
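As a rough sketch of what such an OCC retry loop can look like with the azcosmos Go SDK: the updateWithRetry helper, retry limit, and mutate callback are illustrative assumptions, not necessarily how this PR implements it.

```go
package database

import (
	"context"
	"errors"
	"fmt"
	"net/http"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

// updateWithRetry re-reads the item and retries the replace whenever another
// writer sneaks in between our read and our write (HTTP 412).
func updateWithRetry(ctx context.Context, c *azcosmos.ContainerClient, pk, id string, mutate func([]byte) ([]byte, error)) error {
	const maxAttempts = 5
	partitionKey := azcosmos.NewPartitionKeyString(pk)

	for attempt := 0; attempt < maxAttempts; attempt++ {
		// Read the current item and remember its ETag.
		readResp, err := c.ReadItem(ctx, partitionKey, id, nil)
		if err != nil {
			return err
		}
		updated, err := mutate(readResp.Value)
		if err != nil {
			return err
		}
		// Replace succeeds only if the ETag is unchanged since our read.
		etag := readResp.ETag
		if _, err = c.ReplaceItem(ctx, partitionKey, id, updated, &azcosmos.ItemOptions{IfMatchEtag: &etag}); err == nil {
			return nil
		}
		var respErr *azcore.ResponseError
		if errors.As(err, &respErr) && respErr.StatusCode == http.StatusPreconditionFailed {
			continue // 412 Precondition Failed: lost the race, re-read and retry
		}
		return err
	}
	return fmt.Errorf("update of item %q did not converge after %d attempts", id, maxAttempts)
}
```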
The second approach is to adapt Brian Dunnington's[1] Cloud Distributed Lock pattern for Cosmos DB (see also this YouTube video of Brian demoing the pattern) to protect critical sections in request handlers. A critical section that requires such protection is the sequence of checking Cosmos for ongoing asynchronous operations that would block the current request, initiating a create, update, or delete operation through Cluster Service, and writing operation data, resource tags, and system metadata to Cosmos.
The GitHub link explains the pattern best but I'll try to summarize:
The pattern is somewhat similar to the leasing pattern ARO Classic uses to protect cluster documents, where all the data that needs protection lives in one document and the lock (or lease) is built into that same document. The difference here is that the lock is its own document and leverages two built-in features of Cosmos: per-item TTL, which automatically expires stale lock documents, and atomic creates (creating an item whose ID already exists fails with 409 Conflict), which turns lock acquisition into a single test-and-set operation.
For our purposes, we introduce a new Cosmos container named "Locks" and the IDs for items in this container are subscription IDs. So effectively the entire subscription is locked for the duration of a PUT, PATCH, or DELETE request (GET requests will not use locking). If a PUT, PATCH, or DELETE request arrives while the subscription is locked, the caller gets a 409 Conflict response with a Retry-After header set to the container's TTL value.
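A simplified sketch of the locking behavior described above, omitting lock renewal: the middleware shape, subscriptionLockDoc type, and route wiring are assumptions, while the 409/Retry-After behavior, subscription-ID keying, and 10-second TTL come from the PR text.

```go
package frontend

import (
	"context"
	"encoding/json"
	"errors"
	"net/http"
	"strconv"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

const lockTTL = 10 * time.Second

// subscriptionLockDoc is the lock document: its ID is the subscription ID and
// its per-item "ttl" lets Cosmos delete it automatically if it is never released.
type subscriptionLockDoc struct {
	ID  string `json:"id"`
	TTL int32  `json:"ttl"`
}

// withSubscriptionLock locks the subscription for mutating requests and returns
// 409 Conflict with a Retry-After header when the lock is already held.
// It assumes the route pattern exposes a {subscriptionId} path value (Go 1.22+).
func withSubscriptionLock(locks *azcosmos.ContainerClient, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case http.MethodPut, http.MethodPatch, http.MethodDelete:
			subID := r.PathValue("subscriptionId")
			pk := azcosmos.NewPartitionKeyString(subID)
			doc, _ := json.Marshal(subscriptionLockDoc{ID: subID, TTL: int32(lockTTL / time.Second)})

			// CreateItem is atomic: if a lock document with this ID already
			// exists, Cosmos returns 409 Conflict and another writer owns the lock.
			if _, err := locks.CreateItem(r.Context(), pk, doc, nil); err != nil {
				var respErr *azcore.ResponseError
				if errors.As(err, &respErr) && respErr.StatusCode == http.StatusConflict {
					w.Header().Set("Retry-After", strconv.Itoa(int(lockTTL/time.Second)))
					http.Error(w, "subscription is locked by another operation", http.StatusConflict)
					return
				}
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			// Best-effort release; the item TTL is the backstop if this fails.
			defer locks.DeleteItem(context.Background(), pk, subID, nil)
		}
		next.ServeHTTP(w, r)
	})
}
```

GET requests fall through the switch untouched, matching the note above that reads are not serialized.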
Jira: ARO-10822 - Add concurrency controls around Cosmos DB
Link to demo recording:
Special notes for your reviewer
I think this is working well enough to merge, but it's not foolproof.
Because this is a cloud environment, weird things can happen and the RP can potentially lose the subscription lock it acquired. This should be rare but is still possible. When the lock is lost, the context.Context passed to the request handler is cancelled. Depending on when the context is cancelled, the RP can potentially end up in an inconsistent state.

For example, suppose Cluster Service is taking a really long time to respond to a create cluster request. During that time, the RP somehow loses its subscription lock and the context is cancelled. Cluster Service eventually returns a response, but the context cancellation prevents the RP from recording the new cluster in Cosmos DB.
Footnotes
[1] Brian is a principal software engineering manager at Microsoft.