Add concurrency controls around Cosmos DB #680

Merged: 4 commits into main from database-concurrency on Oct 1, 2024

Conversation

@mbarnes (Collaborator) commented Oct 1, 2024

What this PR does

In anticipation of running multiple RP replicas as well as introducing a polling backend for asynchronous operations, we need to add concurrency controls around our Cosmos DB containers to ensure data integrity in the presence of multiple writers.

This PR takes a two-fold approach:

  1. The first and simplest approach is to utilize Optimistic Concurrency Control when updating container items by setting the IfMatchEtag option and allowing for retries in the event of a 412 Precondition Failed error.
    (covered in the 1st commit; a sketch of this appears just after this list)

  2. The second approach is to adapt Brian Dunnington's[^1] Cloud Distributed Lock pattern for Cosmos DB (see also this YouTube video of Brian demoing the pattern) to protect critical sections in request handlers. A critical section requiring this protection is the sequence of checking Cosmos for ongoing asynchronous operations that would block the current request, initiating a create, update, or delete operation through Cluster Service, and writing operation data, resource tags, and system metadata to Cosmos.

    The GitHub link explains the pattern best but I'll try to summarize:

    The pattern is somewhat similar to the leasing pattern ARO Classic uses to protect cluster documents, where all the data that needs protection is in one document and the lock (or lease) is built into the same document as the data. The difference here is that the lock is its own document, and it leverages two built-in features of Cosmos:

    1. The lock container sets a short default time-to-live (TTL) as a fail-safe in case the RP crashes or otherwise fails to release a lock. Locks are only held briefly -- like a mutex -- so the TTL should only be a few seconds.
    2. Once a lock is acquired by creating a new container item, the RP uses Optimistic Concurrency Control to renew or release (delete) the lock. Because of the short TTL, the RP must periodically renew the lock if it needs to hold it for longer periods. This is done from a goroutine.

For our purposes, we introduce a new Cosmos container named "Locks" and the IDs for items in this container are subscription IDs. So effectively the entire subscription is locked for the duration of a PUT, PATCH, or DELETE request (GET requests will not use locking). If a PUT, PATCH, or DELETE request arrives while the subscription is locked, the caller gets a 409 Conflict response with a Retry-After header set to the container's TTL value. (A sketch of this lock handling appears below.)
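
For item 1, here is a minimal sketch of what the optimistic-concurrency retry might look like with the azcosmos Go SDK. This is not the PR's actual code: the helper name, retry limit, and `mutate` callback are illustrative.

```go
package database

import (
	"context"
	"errors"
	"net/http"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

// updateWithRetry reads an item, lets the caller mutate it, then writes it
// back with IfMatchEtag. A concurrent write makes the replace fail with
// 412 Precondition Failed, in which case we re-read and try again.
func updateWithRetry(ctx context.Context, container *azcosmos.ContainerClient, pk azcosmos.PartitionKey, id string, mutate func([]byte) ([]byte, error)) error {
	const maxRetries = 5
	for attempt := 0; attempt < maxRetries; attempt++ {
		readResp, err := container.ReadItem(ctx, pk, id, nil)
		if err != nil {
			return err
		}

		updated, err := mutate(readResp.Value)
		if err != nil {
			return err
		}

		// The replace only succeeds if the item's ETag is unchanged since we read it.
		_, err = container.ReplaceItem(ctx, pk, id, updated, &azcosmos.ItemOptions{IfMatchEtag: &readResp.ETag})
		if err == nil {
			return nil
		}

		var respErr *azcore.ResponseError
		if errors.As(err, &respErr) && respErr.StatusCode == http.StatusPreconditionFailed {
			continue // another writer got there first; re-read and retry
		}
		return err
	}
	return errors.New("exceeded retries updating Cosmos DB item")
}
```
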
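
And for item 2, a rough sketch of how a lock document might be acquired and released. Again, this is illustrative rather than the PR's code: the document schema, the choice of subscription ID as both item ID and partition key, and the `errLockHeld` sentinel are assumptions. Acquisition is a plain `CreateItem`, so a 409 Conflict means another writer holds the lock; release uses `IfMatchEtag` so we never delete a lock we no longer own.

```go
package database

import (
	"context"
	"encoding/json"
	"errors"
	"net/http"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

// errLockHeld is a hypothetical sentinel error for "someone else has the lock".
var errLockHeld = errors.New("subscription lock is held by another writer")

// lockDoc is an illustrative lock document; the real schema may differ.
type lockDoc struct {
	ID  string `json:"id"`  // subscription ID
	TTL int    `json:"ttl"` // seconds; mirrors the container default
}

// acquireLock creates the lock item for a subscription. Cosmos returns
// 409 Conflict if the item already exists, i.e. the lock is already held.
func acquireLock(ctx context.Context, locks *azcosmos.ContainerClient, subscriptionID string) (azcore.ETag, error) {
	doc, err := json.Marshal(lockDoc{ID: subscriptionID, TTL: 10})
	if err != nil {
		return "", err
	}
	pk := azcosmos.NewPartitionKeyString(subscriptionID)
	resp, err := locks.CreateItem(ctx, pk, doc, nil)
	if err != nil {
		var respErr *azcore.ResponseError
		if errors.As(err, &respErr) && respErr.StatusCode == http.StatusConflict {
			return "", errLockHeld
		}
		return "", err
	}
	return resp.ETag, nil
}

// releaseLock deletes the lock item, but only if we still own it: if the
// ETag no longer matches (the lock expired and was re-acquired elsewhere),
// the delete fails instead of clobbering another writer's lock.
func releaseLock(ctx context.Context, locks *azcosmos.ContainerClient, subscriptionID string, etag azcore.ETag) error {
	pk := azcosmos.NewPartitionKeyString(subscriptionID)
	_, err := locks.DeleteItem(ctx, pk, subscriptionID, &azcosmos.ItemOptions{IfMatchEtag: &etag})
	return err
}
```

Renewal follows the same IfMatchEtag rule; a sketch of that appears under the special notes below.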

Jira: ARO-10822 - Add concurrency controls around Cosmos DB
Link to demo recording:

Special notes for your reviewer

I think this is working well enough to merge, but it's not foolproof.

Because this is a cloud environment, weird things can happen and the RP can potentially lose the subscription lock it acquired. This should be rare but is still possible. When the lock is lost, the context.Context passed to the request handler is cancelled. Depending on when the context is cancelled, the RP can potentially end up in an inconsistent state.

For example, suppose Cluster Service is taking a really long time to respond to a create cluster request. During that time, the RP somehow loses its subscription lock and the context is cancelled. Cluster Service eventually returns a response, but the context cancellation prevents the RP from recording the new cluster in Cosmos DB.
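
One way the "lost lock cancels the context" behavior could be wired up -- purely a sketch that reuses the hypothetical `lockDoc` type and imports from the earlier sketches (plus `time`), not the PR's implementation -- is a renewal goroutine that cancels a derived context the moment a renewal fails:

```go
// holdLock returns a context derived from ctx that is cancelled if the
// subscription lock is lost, plus a stop function the caller should defer.
func holdLock(ctx context.Context, locks *azcosmos.ContainerClient, subscriptionID string, etag azcore.ETag) (context.Context, context.CancelFunc) {
	lockCtx, cancel := context.WithCancel(ctx)

	go func() {
		// Renew well inside the 10-second TTL so a healthy RP never loses the lock.
		ticker := time.NewTicker(3 * time.Second)
		defer ticker.Stop()

		for {
			select {
			case <-lockCtx.Done():
				return
			case <-ticker.C:
				doc, _ := json.Marshal(lockDoc{ID: subscriptionID, TTL: 10})
				pk := azcosmos.NewPartitionKeyString(subscriptionID)
				// Replacing the item resets its TTL clock; IfMatchEtag ensures
				// we only renew a lock we still own.
				resp, err := locks.ReplaceItem(lockCtx, pk, subscriptionID, doc, &azcosmos.ItemOptions{IfMatchEtag: &etag})
				if err != nil {
					cancel() // lock lost (expired, stolen, or 412): abort the handler
					return
				}
				etag = resp.ETag // the ETag changes on every successful renewal
			}
		}
	}()

	return lockCtx, cancel
}
```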

Footnotes

[^1]: Brian is a principal software engineering manager at Microsoft.

Matthew Barnes added 4 commits October 1, 2024 08:03
Upserting is fine and convenient in a single-writer scenario, but
we need to start accounting for multiple writers by way of either
multiple RP replicas or a backend async operations component.

The problem with upsert is that if an item we're trying to update
was deleted by another writer, we may unintentionally recreate it.

Avoid this by splitting the "Set" methods into separate "Create"
and "Update" methods.

(Deferring an "Update" method for operation documents til later.)
This adapts the Cloud Distributed Lock pattern from:
https://github.com/briandunnington/CloudDistributedLock

See also this YouTube video which walks through the pattern:
https://www.youtube.com/live/Hreew-l5rCQ?si=Zy9mbM86M2Ub8Nh3
Container holds Cloud Distributed Locks by subscription IDs with
a default item TTL of 10 seconds.

(10 seconds was chosen somewhat arbitrarily and may need tuning.)
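
If the container were created programmatically, setting that default TTL might look roughly like this with the azcosmos SDK. The "Locks" name and 10-second value come from the commits; the `/id` partition key path, and whether the container is created in Go at all rather than through deployment templates, are assumptions.

```go
func createLocksContainer(ctx context.Context, db *azcosmos.DatabaseClient) error {
	ttl := int32(10) // fail-safe: abandoned lock items expire after ~10 seconds
	_, err := db.CreateContainer(ctx, azcosmos.ContainerProperties{
		ID: "Locks",
		PartitionKeyDefinition: azcosmos.PartitionKeyDefinition{
			Paths: []string{"/id"}, // assumed partition key path
		},
		DefaultTimeToLive: &ttl,
	}, nil)
	return err
}
```
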
Prevents conflicts on PUT/PATCH/DELETE requests by locking the
subscription while the request handler runs. If a subscription
lock cannot be acquired, the request returns a "409 Conflict"
status with a "Retry-After" header in the response.
@mjlshen (Contributor) left a comment

We discussed a bit offline about using the CreateItem vs. UpsertItem API for creating resources. It's technically correct, but we will work on error handling, a background process, or a Geneva Action to clean up stale database records if needed in the future.

@mjlshen mjlshen merged commit d00e408 into main Oct 1, 2024
25 checks passed
@mjlshen mjlshen deleted the database-concurrency branch October 1, 2024 16:02
mjlshen added a commit that referenced this pull request Oct 14, 2024
With the completion of #680 this PR
adds a PodDisruptionBudget to ensure at least 1 replica is running;
by default 2 replicas will be running at all times.

Signed-off-by: Michael Shen <[email protected]>