
OpenFGA: Add request cache to the OpenFGA datastore #14557

Merged Dec 11, 2024 · 6 commits
4 changes: 0 additions & 4 deletions lxd/auth/drivers/openfga.go
@@ -84,10 +84,6 @@ func (e *embeddedOpenFGA) load(ctx context.Context, identityCache *identity.Cach
server.WithDatastore(opts.openfgaDatastore),
// Use our logger.
server.WithLogger(openfgaLogger{l: e.logger}),
// Set the max concurrency to 1 for both read and check requests.
Member
Why did we need this before?

What changed?

The commit message would be improved with a why rather than just a what.

Contributor
When first implemented, the OpenFGADatastore implementation wasn't explicitly safe for concurrent use (i.e. I wasn't really thinking about it on the first pass, I was just trying to get something working 😂), so I added these settings initially for a bit of safety.

I think we could have removed the restrictions before this change, because it is safe to have concurrent transactions (even though they will be serialised by DQLite).

With caching there is now a significant benefit to concurrency: pre-fetching the cache on the first call means that subsequent calls sharing the same cache may not need to hit the DB.
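A minimal sketch of that pattern (illustrative only, not the actual LXD datastore code; names are invented): the first caller pays for the fetch, and every concurrent caller within the same request reuses it.

```go
// Hypothetical sketch of a request-scoped cache that pre-fetches on the
// first call, so the concurrent reads OpenFGA issues for a single API
// request can share one DB round trip.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// requestCache loads its data at most once per request, however many
// concurrent reads hit the datastore.
type requestCache struct {
	once    sync.Once
	data    map[string]string
	dbCalls atomic.Int64 // counts simulated DB round trips
}

func (c *requestCache) get(key string) string {
	c.once.Do(func() {
		c.dbCalls.Add(1) // the single simulated DB fetch
		c.data = map[string]string{"instance/foo": "user/alice"}
	})
	return c.data[key]
}

func main() {
	c := &requestCache{}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ { // concurrent reads within one request
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = c.get("instance/foo")
		}()
	}
	wg.Wait()
	fmt.Println("db calls:", c.dbCalls.Load()) // → db calls: 1
}
```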

Member
> With caching there is now a significant benefit to concurrency: pre-fetching the cache on the first call means that subsequent calls sharing the same cache may not need to hit the DB.

So let me check I understand this.

There is a single long-lived embedded OpenFGA instance, so enabling concurrency allows that server to check access for multiple API requests concurrently, right?

But the cache exists only for the duration of a single request, so what is happening concurrently in the OpenFGA server within that one request? Are you saying that we are pushing the locking down from the OpenFGA server level to the request level, inside the request cache itself?

Member
BTW, the reason I'm asking is that I'm trying to understand whether we actually need the complexity of two RW locks plus an atomic variable, or whether we could use a single mutex and check whether the variables themselves are initialised.
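The simpler alternative being suggested might look like this (a hypothetical sketch, not the code under review): one mutex, with a nil check standing in for the "initialized" flag.

```go
// Sketch of a single-mutex lazy-init cache: the nil check on the map
// doubles as the "initialized" flag, so no atomic variable is needed.
package main

import (
	"fmt"
	"sync"
)

type lazyCache struct {
	mu     sync.Mutex
	tuples map[string]bool
}

// lookup initializes the map under the mutex on first use, then serves
// all subsequent lookups from it.
func (c *lazyCache) lookup(key string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.tuples == nil {
		// Simulated one-time fetch from the DB.
		c.tuples = map[string]bool{"user:alice#member@group:admins": true}
	}
	return c.tuples[key]
}

func main() {
	c := &lazyCache{}
	fmt.Println(c.lookup("user:alice#member@group:admins")) // → true
}
```

The trade-off: a plain mutex serialises every read, which starts to matter once the server is allowed to issue concurrent reads against the datastore; RW locks let the common read path proceed in parallel after initialization.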

Member
@markylaing explained in a call that the OpenFGA server will traverse the model and make concurrent calls to the DB driver (and thus the cache) even for a single API request.

// Our driver cannot perform concurrent reads.
server.WithMaxConcurrentReadsForListObjects(1),
server.WithMaxConcurrentReadsForCheck(1),
// Enable context propagation to the datastore so that the OpenFGA cache
// created for each request can be accessed from our OpenFGA datastore implementation.
server.WithContextPropagationToDatastore(true),