
[FEATURE] AsyncKeyedLock #134

Open
MarkCiliaVincenti opened this issue Apr 10, 2023 · 58 comments
Labels: enhancement (New feature or request)

@MarkCiliaVincenti
Contributor

I am the author of the AsyncKeyedLock library and I believe that this library would benefit from depending on it. If you're interested, I can make a PR.

@jodydonetti
Collaborator

jodydonetti commented Apr 12, 2023

Hi @MarkCiliaVincenti and thanks for your suggestion.

I took a quick look at your library and it's interesting, but FusionCache already has a similar mechanism in the form of various implementations of what I called a core "reactor".

The base abstraction is the IFusionCacheReactor interface which models a way to acquire/release a lock, with support for both sync and async programming models, timeouts, etc: you can see here I've been playing from the very beginning with different implementations, one of which (the "standard") uses a very similar technique as yours.

Currently I've yet to open up the whole reactor thing to the public: the source is of course already available (though not all of it), but I'm not exposing the various implementations as public because it's still an area of active research, with a couple more experiments on a separate local branch offline that aren't ready to share yet.
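For readers following along: the semaphore-per-key technique that both the "standard" reactor and AsyncKeyedLock implement can be sketched roughly as below. This is illustrative Python with invented names, a sketch of the general idea only, not either library's actual API:

```python
import asyncio

class KeyedLock:
    """One lock per key, created on demand and removed once unused.

    A minimal sketch of the semaphore-per-key technique; all names here
    are illustrative, not FusionCache's or AsyncKeyedLock's API.
    """

    def __init__(self) -> None:
        # key -> (lock, number of tasks currently interested in it)
        self._locks: dict[str, tuple[asyncio.Lock, int]] = {}

    async def acquire(self, key: str) -> None:
        lock, refs = self._locks.get(key, (asyncio.Lock(), 0))
        self._locks[key] = (lock, refs + 1)   # register interest before waiting
        await lock.acquire()

    def release(self, key: str) -> None:
        lock, refs = self._locks[key]
        lock.release()
        if refs == 1:
            del self._locks[key]              # last user: drop the entry
        else:
            self._locks[key] = (lock, refs - 1)
```

Two callers with the same key serialize while different keys proceed independently, which is exactly the property a cache-stampede guard needs.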

Can you point me to a potential advantage in using your lib?

Thanks!

@jodydonetti
Collaborator

Closing for now since there has been no response.
Will reopen if something new comes up.

@MarkCiliaVincenti
Contributor Author

Hi, I saw https://github.com/ZiggyCreatures/FusionCache/blob/main/src/ZiggyCreatures.FusionCache/Reactors/FusionCacheReactorUnboundedWithPool.cs now

The library uses more or less the same techniques. I'm surprised, though, that you used striping for the pool and then still used a dictionary. Why not just use lock striping? https://github.com/MarkCiliaVincenti/AsyncKeyedLock/blob/master/AsyncKeyedLock/StripedAsyncKeyedLocker.cs
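For reference, lock striping in its simplest form is just a fixed array of locks with each key hashed to one of them: no per-key allocation and no dictionary to maintain, at the cost of unrelated keys occasionally sharing a stripe. A Python sketch with invented names (not the StripedAsyncKeyedLocker API):

```python
import threading

class StripedLock:
    """Lock striping: a fixed array of locks, each key hashed to a stripe.

    Illustrative sketch; the real StripedAsyncKeyedLocker is linked above.
    """

    def __init__(self, stripes: int = 32) -> None:
        self._locks = [threading.Lock() for _ in range(stripes)]

    def stripe_for(self, key: str) -> threading.Lock:
        # same key always maps to the same stripe within a process
        return self._locks[hash(key) % len(self._locks)]
```

Usage is simply `with striped.stripe_for("user:42"): ...`; the trade-off versus a keyed dictionary is false contention between keys that happen to share a stripe.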

@jodydonetti
Collaborator

Hi @MarkCiliaVincenti , the specific reactor you pointed at (unbounded with pool) is not using a lock striping approach: it is using a normal dictionary (not a concurrent one) with a semaphore-per-key (so 1:1, not striped).

The small lock pool you see is there just to access the semaphore dictionary when it's time to add a new one, instead of using a concurrent dictionary (and after an initial read phase, so the classic double-checked locking).

It is just one of the many experiments I've played with here and there, and not the real one currently used inside FusionCache.
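The pattern described here, a plain dictionary with a small lock pool guarding inserts after a lock-free read (classic double-checked locking), can be sketched like this in illustrative Python, not the actual FusionCache code:

```python
import threading

# plain dict (not a concurrent one) plus a small pool of locks guarding inserts
_semaphores: dict[str, threading.Semaphore] = {}
_insert_locks = [threading.Lock() for _ in range(8)]

def get_semaphore(key: str) -> threading.Semaphore:
    # first check: plain read, no lock taken (the common hot path)
    sem = _semaphores.get(key)
    if sem is not None:
        return sem
    # pick one lock from the small pool based on the key's hash
    with _insert_locks[hash(key) % len(_insert_locks)]:
        # second check: another writer may have won the race meanwhile
        sem = _semaphores.get(key)
        if sem is None:
            sem = threading.Semaphore(1)
            _semaphores[key] = sem
        return sem
```

The read phase pays no locking cost at all; the pool only serializes the rare insert path, and the second check prevents two racing writers from creating two semaphores for the same key.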

@MarkCiliaVincenti
Contributor Author

@jodydonetti, AsyncKeyedLock has matured since we last discussed and you may be interested in adding a dependency to it. I can help with this if you're interested.

@jodydonetti jodydonetti reopened this Jan 10, 2024
@jodydonetti
Collaborator

Hi @MarkCiliaVincenti , I have some good news about this!

Since I'm about to get to the big v1.0 release, I'm also finally opening up the internals related to the locking part: with the next release you'll be able to create your own implementation of the memory lock and provide it as a third-party package, just like an implementation of a backplane or something like that.

In this way users will be able to pick and choose whichever they prefer.

Makes sense? Hope you like this.

Will update soon with some details.

@jodydonetti jodydonetti self-assigned this Jan 10, 2024
@jodydonetti jodydonetti added the enhancement New feature or request label Jan 10, 2024
@MarkCiliaVincenti
Contributor Author

I'm not entirely sure what you mean Jody. Do you need some changes to the PR?

@MarkCiliaVincenti
Contributor Author

Ah I think I get it now. But that would mean needing to create my own package if I understand correctly, or will it be an official FusionCache package which you'd link to from a readme where you explain pros and cons?

@jodydonetti
Collaborator

I'm not entirely sure what you mean Jody. Do you need some changes to the PR?

Wait, which PR? Is there one?

@jodydonetti
Collaborator

Ah I think I get it now. But that would mean needing to create my own package if I understand correctly, or will it be an official FusionCache package which you'd link to from a readme where you explain pros and cons?

I thought about you creating your own package, just because I thought you would've preferred to do that to have control.
Having said that, I can even create it myself like the serialization adapters I already created for the common serialization libs, no worries.

I'll update you soon then.

@jodydonetti
Collaborator

Hi @MarkCiliaVincenti , I just released v0.25.0 🥳

It is now possible to create custom IFusionCacheMemoryLocker designs and implementations, like one based on your own AsyncKeyedLock.

I quickly tried to create one and benchmarked it, with these results:

| Method | FactoryDurationMs | Accessors | KeysCount | Rounds | Mean | Error | StdDev | P95 | Ratio | RatioSD | Gen0 | Gen1 | Gen2 | Allocated | Alloc Ratio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FusionCache | 100 | 10 | 100 | 1 | 108.3 ms | 0.43 ms | 0.40 ms | 108.8 ms | 1.00 | 0.00 | - | - | - | 1773.63 KB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 10 | 100 | 1 | 108.2 ms | 0.37 ms | 0.34 ms | 108.6 ms | 1.00 | 0.01 | - | - | - | 1558.52 KB | 0.88 |
| FusionCache | 100 | 10 | 100 | 10 | 108.7 ms | 0.59 ms | 0.55 ms | 109.5 ms | 1.00 | 0.00 | 400.0000 | 200.0000 | - | 6375.15 KB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 10 | 100 | 10 | 108.7 ms | 0.43 ms | 0.40 ms | 109.1 ms | 1.00 | 0.01 | 400.0000 | 200.0000 | - | 6165.36 KB | 0.97 |
| FusionCache | 100 | 1000 | 100 | 1 | 231.7 ms | 4.63 ms | 7.87 ms | 244.7 ms | 1.00 | 0.00 | 10000.0000 | 9000.0000 | 1000.0000 | 112276.46 KB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 1000 | 100 | 1 | 244.2 ms | 4.84 ms | 10.92 ms | 261.5 ms | 1.06 | 0.06 | 11000.0000 | 10000.0000 | 1000.0000 | 123452.45 KB | 1.10 |
| FusionCache | 100 | 1000 | 100 | 10 | 507.8 ms | 10.04 ms | 25.38 ms | 542.0 ms | 1.00 | 0.00 | 34000.0000 | 23000.0000 | 6000.0000 | 393337.04 KB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 1000 | 100 | 10 | 534.0 ms | 14.71 ms | 43.38 ms | 592.5 ms | 1.06 | 0.11 | 34000.0000 | 23000.0000 | 5000.0000 | 403604.58 KB | 1.03 |

As you can see the numbers are roughly the same (I used the standard locker, not the striped one); with higher numbers of keys, accessors and so on it actually uses slightly more resources, so I don't feel like taking on the extra burden of creating and maintaining a separate package myself.

Having said that, you are now free to play with it if interested, and maybe come up with an even better one.

Thanks!

@MarkCiliaVincenti
Contributor Author

MarkCiliaVincenti commented Feb 4, 2024

Can you please show me the code you used for benchmarking? Did you enable pooling?

@MarkCiliaVincenti
Contributor Author

Also, one thing I'd change in your public benchmarks: don't include the cache-creation parts within the benchmark itself, but put them in setup methods, so that you measure only the operation parts.
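The point generalizes to any benchmark harness that separates setup from the timed body; for example, in Python's timeit (standing in here for BenchmarkDotNet's [GlobalSetup]), the setup callable runs once outside the measured region:

```python
import timeit

def setup():
    # stand-in for creating caches/lockers: runs once, outside the timed region
    global cache
    cache = {}

def operation():
    # the part we actually want to measure
    cache["k"] = cache.get("k", 0) + 1

# setup is excluded from the measurement; only operation() is timed
elapsed = timeit.timeit(operation, setup=setup, number=1000)
```

Putting creation inside the timed body instead would charge every iteration (or at least the run) with one-off construction costs that production code pays only at startup.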

@MarkCiliaVincenti
Contributor Author

I tried a basic experiment (no logging so far, and a pool size of 20, which is probably not ideal):

| Method | FactoryDurationMs | Accessors | KeysCount | Rounds | Mean | Error | StdDev | P95 | Ratio | RatioSD | Gen0 | Gen1 | Gen2 | Allocated | Alloc Ratio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FusionCache | 100 | 10 | 100 | 1 | 121.7 ms | 1.96 ms | 1.83 ms | 124.3 ms | 1.00 | 0.00 | 600.0000 | 200.0000 | - | 2.6 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 10 | 100 | 1 | 119.2 ms | 2.27 ms | 2.43 ms | 122.8 ms | 0.98 | 0.03 | 400.0000 | 200.0000 | - | 2.22 MB | 0.86 |
| FusionCache | 100 | 10 | 100 | 10 | 132.1 ms | 2.64 ms | 3.43 ms | 137.7 ms | 1.00 | 0.00 | 1500.0000 | 750.0000 | - | 7.28 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 10 | 100 | 10 | 130.6 ms | 2.59 ms | 3.72 ms | 136.2 ms | 0.99 | 0.04 | 1750.0000 | 500.0000 | - | 6.9 MB | 0.95 |
| FusionCache | 100 | 1000 | 100 | 1 | 1,332.3 ms | 26.31 ms | 53.15 ms | 1,406.6 ms | 1.00 | 0.00 | 34000.0000 | 13000.0000 | 2000.0000 | 181.32 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 1000 | 100 | 1 | 1,397.1 ms | 27.60 ms | 69.74 ms | 1,515.0 ms | 1.06 | 0.08 | 34000.0000 | 13000.0000 | 3000.0000 | 178.79 MB | 0.99 |
| FusionCache | 100 | 1000 | 100 | 10 | 2,210.3 ms | 40.58 ms | 37.96 ms | 2,275.1 ms | 1.00 | 0.00 | 111000.0000 | 30000.0000 | 2000.0000 | 578.72 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 1000 | 100 | 10 | 2,153.9 ms | 40.60 ms | 67.84 ms | 2,258.9 ms | 0.98 | 0.04 | 78000.0000 | 26000.0000 | 5000.0000 | 459.4 MB | 0.79 |

@MarkCiliaVincenti
Contributor Author

With a pool size of 100 instead

| Method | FactoryDurationMs | Accessors | KeysCount | Rounds | Mean | Error | StdDev | P95 | Ratio | RatioSD | Gen0 | Gen1 | Gen2 | Allocated | Alloc Ratio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FusionCache | 100 | 10 | 100 | 1 | 121.3 ms | 1.93 ms | 1.80 ms | 123.3 ms | 1.00 | 0.00 | 600.0000 | 200.0000 | - | 2.6 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 10 | 100 | 1 | 119.7 ms | 2.27 ms | 2.23 ms | 122.8 ms | 0.99 | 0.03 | 600.0000 | 200.0000 | - | 2.21 MB | 0.85 |
| FusionCache | 100 | 10 | 100 | 10 | 131.8 ms | 2.60 ms | 3.89 ms | 137.5 ms | 1.00 | 0.00 | 1250.0000 | 500.0000 | - | 7.25 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 10 | 100 | 10 | 129.4 ms | 2.14 ms | 2.78 ms | 133.6 ms | 0.99 | 0.04 | 1250.0000 | 500.0000 | - | 6.89 MB | 0.95 |
| FusionCache | 100 | 1000 | 100 | 1 | 1,403.6 ms | 26.98 ms | 67.19 ms | 1,525.5 ms | 1.00 | 0.00 | 34000.0000 | 13000.0000 | 2000.0000 | 187.32 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 1000 | 100 | 1 | 1,369.7 ms | 27.15 ms | 52.95 ms | 1,440.5 ms | 0.97 | 0.07 | 33000.0000 | 13000.0000 | 2000.0000 | 178.27 MB | 0.95 |
| FusionCache | 100 | 1000 | 100 | 10 | 2,148.5 ms | 42.16 ms | 60.47 ms | 2,228.1 ms | 1.00 | 0.00 | 80000.0000 | 25000.0000 | 6000.0000 | 456.96 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 1000 | 100 | 10 | 2,096.3 ms | 40.93 ms | 66.10 ms | 2,193.8 ms | 0.98 | 0.04 | 76000.0000 | 23000.0000 | 3000.0000 | 463.16 MB | 1.01 |

@MarkCiliaVincenti
Contributor Author

MarkCiliaVincenti commented Feb 4, 2024

And here are the results after adding more overloaded methods and adding logging like the other lockers.

You can check for yourself at https://github.com/MarkCiliaVincenti/FusionCache/tree/AsyncKeyedLocker

| Method | FactoryDurationMs | Accessors | KeysCount | Rounds | Mean | Error | StdDev | P95 | Ratio | RatioSD | Gen0 | Gen1 | Gen2 | Allocated | Alloc Ratio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FusionCache | 100 | 10 | 100 | 1 | 121.1 ms | 1.07 ms | 1.00 ms | 122.2 ms | 1.00 | 0.00 | 600.0000 | 200.0000 | - | 2.58 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 10 | 100 | 1 | 119.4 ms | 2.30 ms | 2.25 ms | 122.7 ms | 0.99 | 0.02 | 600.0000 | 200.0000 | - | 2.24 MB | 0.87 |
| FusionCache | 100 | 10 | 100 | 10 | 132.7 ms | 2.60 ms | 2.78 ms | 137.4 ms | 1.00 | 0.00 | 1500.0000 | 750.0000 | - | 7.29 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 10 | 100 | 10 | 128.9 ms | 2.46 ms | 2.74 ms | 133.1 ms | 0.97 | 0.03 | 1000.0000 | 500.0000 | - | 6.89 MB | 0.95 |
| FusionCache | 100 | 1000 | 100 | 1 | 1,383.6 ms | 27.58 ms | 72.65 ms | 1,484.1 ms | 1.00 | 0.00 | 33000.0000 | 12000.0000 | 2000.0000 | 185.41 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 1000 | 100 | 1 | 1,386.0 ms | 27.52 ms | 60.97 ms | 1,502.4 ms | 1.01 | 0.08 | 35000.0000 | 14000.0000 | 3000.0000 | 178.14 MB | 0.96 |
| FusionCache | 100 | 1000 | 100 | 10 | 2,178.4 ms | 39.43 ms | 45.40 ms | 2,230.2 ms | 1.00 | 0.00 | 79000.0000 | 27000.0000 | 5000.0000 | 470.72 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 1000 | 100 | 10 | 2,163.7 ms | 43.24 ms | 86.35 ms | 2,304.8 ms | 1.01 | 0.04 | 77000.0000 | 24000.0000 | 3000.0000 | 464.65 MB | 0.99 |

@MarkCiliaVincenti
Contributor Author

Updated https://github.com/MarkCiliaVincenti/FusionCache/tree/AsyncKeyedLocker

New benchmarks:

| Method | FactoryDurationMs | Accessors | KeysCount | Rounds | Mean | Error | StdDev | P95 | Ratio | RatioSD | Gen0 | Gen1 | Gen2 | Allocated | Alloc Ratio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FusionCache | 100 | 10 | 100 | 10 | 132.9 ms | 2.38 ms | 2.74 ms | 137.0 ms | 1.00 | 0.00 | 1500.0000 | 750.0000 | - | 7.25 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 10 | 100 | 10 | 129.7 ms | 2.48 ms | 2.76 ms | 134.0 ms | 0.98 | 0.03 | 1250.0000 | 500.0000 | - | 6.91 MB | 0.95 |
| FusionCache | 100 | 10 | 1000 | 10 | 342.7 ms | 6.48 ms | 6.06 ms | 351.6 ms | 1.00 | 0.00 | 16000.0000 | 7000.0000 | 3000.0000 | 70.75 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 10 | 1000 | 10 | 336.2 ms | 6.71 ms | 14.73 ms | 359.4 ms | 0.98 | 0.04 | 13000.0000 | 5000.0000 | 2000.0000 | 69.49 MB | 0.98 |
| FusionCache | 100 | 1000 | 100 | 10 | 2,179.5 ms | 43.46 ms | 78.38 ms | 2,306.0 ms | 1.00 | 0.00 | 78000.0000 | 23000.0000 | 3000.0000 | 467.44 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 1000 | 100 | 10 | 2,153.1 ms | 42.33 ms | 80.54 ms | 2,268.5 ms | 0.99 | 0.05 | 76000.0000 | 23000.0000 | 3000.0000 | 465.63 MB | 1.00 |
| FusionCache | 100 | 1000 | 1000 | 10 | 23,424.7 ms | 462.26 ms | 601.07 ms | 24,193.4 ms | 1.00 | 0.00 | 757000.0000 | 253000.0000 | 4000.0000 | 4532.05 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 1000 | 1000 | 10 | 23,199.7 ms | 463.65 ms | 664.95 ms | 24,350.6 ms | 0.99 | 0.03 | 745000.0000 | 263000.0000 | 9000.0000 | 4501.53 MB | 0.99 |

@jodydonetti
Collaborator

Thanks @MarkCiliaVincenti , I'll take a look for sure and will let you know!

@MarkCiliaVincenti
Contributor Author

Results from a more powerful PC (I usually like to use my older laptop for benchmarks):

| Method | FactoryDurationMs | Accessors | KeysCount | Rounds | Mean | Error | StdDev | P95 | Ratio | RatioSD | Gen0 | Gen1 | Gen2 | Allocated | Alloc Ratio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FusionCache | 100 | 10 | 100 | 10 | 125.6 ms | 2.48 ms | 3.04 ms | 130.5 ms | 1.00 | 0.00 | 400.0000 | 200.0000 | - | 7.36 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 10 | 100 | 10 | 122.4 ms | 1.59 ms | 1.33 ms | 123.6 ms | 0.97 | 0.03 | 250.0000 | - | - | 6.98 MB | 0.95 |
| FusionCache | 100 | 10 | 1000 | 10 | 244.5 ms | 4.40 ms | 4.12 ms | 249.6 ms | 1.00 | 0.00 | 3500.0000 | 1500.0000 | 500.0000 | 71.05 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 10 | 1000 | 10 | 239.4 ms | 3.75 ms | 3.32 ms | 243.2 ms | 0.98 | 0.02 | 4000.0000 | 1666.6667 | 666.6667 | 70.07 MB | 0.99 |
| FusionCache | 100 | 1000 | 100 | 10 | 1,155.7 ms | 23.05 ms | 46.03 ms | 1,246.0 ms | 1.00 | 0.00 | 55000.0000 | 19000.0000 | 3000.0000 | 946.66 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 1000 | 100 | 10 | 1,166.2 ms | 23.32 ms | 28.64 ms | 1,200.1 ms | 1.00 | 0.06 | 37000.0000 | 13000.0000 | 4000.0000 | 639.07 MB | 0.68 |
| FusionCache | 100 | 1000 | 1000 | 10 | 9,461.7 ms | 208.04 ms | 606.85 ms | 10,653.9 ms | 1.00 | 0.00 | 306000.0000 | 115000.0000 | 11000.0000 | 5549.55 MB | 1.00 |
| FusionCacheAsyncKeyedLocker | 100 | 1000 | 1000 | 10 | 9,634.1 ms | 190.99 ms | 522.82 ms | 10,610.1 ms | 1.01 | 0.06 | 281000.0000 | 120000.0000 | 10000.0000 | 5165.03 MB | 0.93 |

@jodydonetti
Collaborator

Did you enable pooling?

I used the defaults with just:

var asyncKeyedLocker = new AsyncKeyedLocker<string>();

I've read the wiki about pooling, but I'm not 100% sure about the reasoning behind enabling pooling or not, and in that case how to pick the various options (size, fill).

Can you elaborate a little bit more about these details? Thanks!
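For context, the general idea behind a pre-filled object pool with a size cap can be sketched as follows. This is a hypothetical Python pool, conceptually mirroring what options named PoolSize and PoolInitialFill suggest; the real semantics are documented in the AsyncKeyedLock wiki:

```python
import threading
from collections import deque

class SemaphorePool:
    """A pre-filled pool of semaphores: initial_fill objects are created
    eagerly at startup, and at most pool_size idle objects are kept around
    for reuse. Illustrative sketch, not the AsyncKeyedLock implementation."""

    def __init__(self, pool_size: int, initial_fill: int) -> None:
        self.pool_size = pool_size
        self._items = deque(threading.Semaphore(1) for _ in range(initial_fill))

    def rent(self) -> threading.Semaphore:
        try:
            return self._items.popleft()      # reuse: no allocation
        except IndexError:
            return threading.Semaphore(1)     # pool exhausted: allocate fresh

    def give_back(self, sem: threading.Semaphore) -> None:
        if len(self._items) < self.pool_size:
            self._items.append(sem)           # keep for reuse
        # otherwise drop it and let the GC reclaim it
```

The pay-off is fewer allocations on the hot path once the pool is warm; the cost is the up-front fill and the memory held by idle objects.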

@jodydonetti jodydonetti reopened this Feb 5, 2024
@MarkCiliaVincenti
Contributor Author

MarkCiliaVincenti commented Feb 5, 2024 via email

@MarkCiliaVincenti
Contributor Author

MarkCiliaVincenti commented Feb 5, 2024 via email

@MarkCiliaVincenti
Contributor Author

MarkCiliaVincenti commented Feb 5, 2024 via email

@jodydonetti
Collaborator

Also didn't realise the key is a string, so maybe you can get better performance using AsyncKeyedLocker<string> instead of AsyncKeyedLocker<object>

Yep, in my tests I've in fact used AsyncKeyedLocker<string>.

@MarkCiliaVincenti
Contributor Author

Very interesting, I would like it if you could retest 1000/1000.

I'm still not sure what you will do though, will you create a new package that adds AsyncKeyedLock as a dependency?

@jodydonetti
Collaborator

Very interesting, I would like it if you could retest 1000/1000.

You mean 1000 accessors and 100 key count? Sure thing, I'll run it now and will report back with the results.

I'm still not sure what you will do though, will you create a new package that adds AsyncKeyedLock as a dependency?

Yes, that's the idea I meant here:

I'd like to release the new package, which will act as a bridge to your AsyncKeyedLock package, to give users the choice to pick the one they prefer.

I'll also update the official docs to mention it and put the package reference and everything else, like with the serialization packages (see here and here).

Then of course it will be mentioned in the v1.4.0 release notes, public posts, etc.

Basically I want to give users the option to use it, but don't force the extra dependency on everyone.

Is it ok for you?

@MarkCiliaVincenti
Contributor Author

Sounds good!

@jodydonetti
Collaborator

Here are the results:

| Method | FactoryDurationMs | Accessors | KeysCount | Rounds | Mean | Error | StdDev | P95 | Ratio | RatioSD | Gen0 | Gen1 | Gen2 | Allocated | Alloc Ratio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FusionCache | 100 | 1000 | 1000 | 1 | 230.2 ms | 4.52 ms | 9.14 ms | 247.1 ms | 1.00 | 0.05 | 22666.6667 | 11666.6667 | 666.6667 | 301.31 MB | 1.00 |
| FusionCache_AsyncKeyed | 100 | 1000 | 1000 | 1 | 224.3 ms | 4.55 ms | 13.34 ms | 246.0 ms | 0.98 | 0.07 | 23000.0000 | 12000.0000 | 1000.0000 | 301.58 MB | 1.00 |
| FusionCache | 100 | 1000 | 1000 | 10 | 2,242.5 ms | 44.11 ms | 52.52 ms | 2,308.8 ms | 1.00 | 0.03 | 224000.0000 | 114000.0000 | 4000.0000 | 3004.4 MB | 1.00 |
| FusionCache_AsyncKeyed | 100 | 1000 | 1000 | 10 | 2,214.0 ms | 34.80 ms | 30.85 ms | 2,253.4 ms | 0.99 | 0.03 | 224000.0000 | 114000.0000 | 4000.0000 | 3004.09 MB | 1.00 |

Thoughts?

@MarkCiliaVincenti
Contributor Author

Just curious what pooling settings you're using, and whether or not you're including setup as part of the actual benchmarks. Ideally, to be fair, any setup should be excluded from the benchmarks: creating the dictionary, creating the semaphores for the pool, etc. Those are created at startup (or under load, if PoolInitialFill is lower than PoolSize).

@MarkCiliaVincenti
Contributor Author

Try setting PoolSize and PoolInitialFill to Environment.ProcessorCount * 2 on startup.

@jodydonetti
Collaborator

Just curious what pooling settings you're using

You previously said that:

What you need to know is that v7.0.0 is out which is slightly more performant and which now doesn't require you to set pooling options since they are set by default to what is tried-and-tested.

Therefore I did not set any special values.

and whether or not you're including setup as part of the actual benchmarks.

I initialize FusionCache, including the memory locker, which in turn includes the AsyncKeyedLocker instance, before the benchmark code, so it's not included.

@MarkCiliaVincenti
Contributor Author

Yes I did, but for benchmarking it's a bit different, isn't it? For benchmarking purposes you should always set PoolInitialFill to the same value as PoolSize.

@jodydonetti
Collaborator

Yes I did but for benchmarking it's a bit different isn't it?

Usually I benchmark with the default settings, otherwise it would feel like cheating (imho, it's a personal thing).
But I can benchmark with some sort of "optimal settings" to see what we can get.

Try setting PoolSize and PoolInitialFill to Environment.ProcessorCount * 2 on startup.

I'm doing this right now, results asap.

@jodydonetti
Collaborator

Here are the results:

| Method | FactoryDurationMs | Accessors | KeysCount | Rounds | Mean | Error | StdDev | P95 | Ratio | RatioSD | Gen0 | Gen1 | Gen2 | Allocated | Alloc Ratio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FusionCache | 100 | 1000 | 1000 | 1 | 242.4 ms | 4.80 ms | 10.24 ms | 262.2 ms | 1.00 | 0.06 | 22666.6667 | 11666.6667 | 666.6667 | 300.85 MB | 1.00 |
| FusionCache_AsyncKeyed | 100 | 1000 | 1000 | 1 | 221.1 ms | 4.68 ms | 13.72 ms | 244.2 ms | 0.91 | 0.07 | 22000.0000 | 11000.0000 | 1000.0000 | 299.71 MB | 1.00 |
| FusionCache_AsyncKeyedOptimized | 100 | 1000 | 1000 | 1 | 242.0 ms | 6.55 ms | 19.31 ms | 268.9 ms | 1.00 | 0.09 | 23000.0000 | 12000.0000 | 1000.0000 | 300.54 MB | 1.00 |
| FusionCache | 100 | 1000 | 1000 | 10 | 2,305.3 ms | 45.25 ms | 50.30 ms | 2,388.3 ms | 1.00 | 0.03 | 224000.0000 | 114000.0000 | 4000.0000 | 3005.55 MB | 1.00 |
| FusionCache_AsyncKeyed | 100 | 1000 | 1000 | 10 | 2,314.8 ms | 44.44 ms | 56.20 ms | 2,390.9 ms | 1.00 | 0.03 | 225000.0000 | 114000.0000 | 4000.0000 | 3009.13 MB | 1.00 |
| FusionCache_AsyncKeyedOptimized | 100 | 1000 | 1000 | 10 | 2,277.4 ms | 43.70 ms | 48.57 ms | 2,359.6 ms | 0.99 | 0.03 | 224000.0000 | 114000.0000 | 4000.0000 | 3007.41 MB | 1.00 |

Let me know if you need more, but for today I'm good, I need some time off 😅

Will release probably in the next few days.

Thanks!

@jodydonetti
Collaborator

Hi @MarkCiliaVincenti , while doing some final benchmarking yesterday I stumbled upon a couple of runs where AsyncKeyedLock seems to get stuck with what looked like a deadlock (the BenchmarkDotNet run simply stopped forever).

Has this ever happened to you? Maybe it's a race condition or something.

I'll post here the benchmark code I've used as soon as I can (daily job right now) so you can take a look at that.

@MarkCiliaVincenti
Contributor Author

I haven't had any reports about deadlocks, Jody. During development I sometimes had some issues whilst testing, and every time it turned out to be an issue with the test code rather than the library. Waiting for your code.

@jodydonetti
Collaborator

jodydonetti commented Sep 13, 2024

Hi @MarkCiliaVincenti , so here's the output of the benchmark where it got stuck:

// ** Remained 1 (50.0%) benchmark(s) to run. Estimated finish 2024-09-13 22:23 (0h 0m from now) **
Setup power plan (GUID: 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c FriendlyName: High performance)
// **************************
// Benchmark: ParallelComparisonBenchmark.FusionCache_AsyncKeyed: DefaultJob [FactoryDurationMs=100, Accessors=1000, KeysCount=1000, Rounds=1]
// *** Execute ***
// Launch: 1 / 1
// Execute: dotnet 90a949a1-f557-4f8c-a66c-b5bb0df737f1.dll --anonymousPipes 2564 2364 --benchmarkName "ZiggyCreatures.Caching.Fusion.Benchmarks.ParallelComparisonBenchmark.FusionCache_AsyncKeyed(FactoryDurationMs: 100, Accessors: 1000, KeysCount: 1000, Rounds: 1)" --job Default --benchmarkId 1 in C:\Users\indas\source\repos\ZiggyCreatures\FusionCache\benchmarks\ZiggyCreatures.FusionCache.Benchmarks\bin\Release\net8.0\90a949a1-f557-4f8c-a66c-b5bb0df737f1\bin\Release\net8.0
// BeforeAnythingElse

// Benchmark Process Environment Information:
// BenchmarkDotNet v0.14.0
// Runtime=.NET 8.0.8 (8.0.824.36612), X64 RyuJIT AVX2
// GC=Concurrent Workstation
// HardwareIntrinsics=AVX2,AES,BMI1,BMI2,FMA,LZCNT,PCLMUL,POPCNT,AvxVnni,SERIALIZE VectorSize=256
// Job: DefaultJob

OverheadJitting  1: 1 op, 125800.00 ns, 125.8000 us/op

This is the memory locker implementation:

using System;
using System.Threading;
using System.Threading.Tasks;
using AsyncKeyedLock;
using Microsoft.Extensions.Logging;

namespace ZiggyCreatures.Caching.Fusion.Locking.AsyncKeyedLocking;

/// <summary>
/// An implementation of <see cref="IFusionCacheMemoryLocker"/> based on AsyncKeyedLock.
/// </summary>
public sealed class AsyncKeyedMemoryLocker
	: IFusionCacheMemoryLocker
{
	private readonly AsyncKeyedLocker<string> _locker;

	/// <summary>
	/// Initializes a new instance of the <see cref="AsyncKeyedMemoryLocker"/> class.
	/// </summary>
	public AsyncKeyedMemoryLocker(AsyncKeyedLockOptions? options = null)
	{
		options ??= new AsyncKeyedLockOptions();
		_locker = new AsyncKeyedLocker<string>(options);
	}

	/// <inheritdoc/>
	public async ValueTask<object?> AcquireLockAsync(string cacheName, string cacheInstanceId, string key, string operationId, TimeSpan timeout, ILogger? logger, CancellationToken token)
	{
		var releaser = _locker.GetOrAdd(key);
		var acquired = await releaser.SemaphoreSlim.WaitAsync(timeout, token).ConfigureAwait(false);
		return acquired ? releaser : null;
	}

	/// <inheritdoc/>
	public object? AcquireLock(string cacheName, string cacheInstanceId, string key, string operationId, TimeSpan timeout, ILogger? logger, CancellationToken token)
	{
		var releaser = _locker.GetOrAdd(key);
		var acquired = releaser.SemaphoreSlim.Wait(timeout, token);
		return acquired ? releaser : null;
	}

	/// <inheritdoc/>
	public void ReleaseLock(string cacheName, string cacheInstanceId, string key, string operationId, object? lockObj, ILogger? logger)
	{
		if (lockObj is null)
			return;

		try
		{
			((AsyncKeyedLockReleaser<string>)lockObj).Dispose();
		}
		catch (Exception exc)
		{
			if (logger?.IsEnabled(LogLevel.Warning) ?? false)
				logger.Log(LogLevel.Warning, exc, "FUSION [N={CacheName} I={CacheInstanceId}] (O={CacheOperationId} K={CacheKey}): an error occurred while trying to release a SemaphoreSlim in the memory locker", cacheName, cacheInstanceId, operationId, key);
		}
	}

	// IDISPOSABLE
	private bool disposedValue;
	private void Dispose(bool disposing)
	{
		if (!disposedValue)
		{
			if (disposing)
			{
				if (_locker is not null)
				{
					_locker.Dispose();
				}
			}

			disposedValue = true;
		}
	}

	/// <inheritdoc/>
	public void Dispose()
	{
		Dispose(disposing: true);
		GC.SuppressFinalize(this);
	}
}

This is the benchmark code:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Columns;
using BenchmarkDotNet.Configs;
using CacheTower;
using CacheTower.Extensions;
using CacheTower.Providers.Memory;
using EasyCaching.Core;
using LazyCache;
using LazyCache.Providers;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.DependencyInjection;
using ZiggyCreatures.Caching.Fusion.Locking;
using ZiggyCreatures.Caching.Fusion.Locking.AsyncKeyedLocking;

namespace ZiggyCreatures.Caching.Fusion.Benchmarks;

[MemoryDiagnoser]
[Config(typeof(Config))]
public class ParallelComparisonBenchmark
{
	private class Config : ManualConfig
	{
		public Config()
		{
			AddColumn(
				StatisticColumn.P95
			);
		}
	}

	[Params(100)]
	public int FactoryDurationMs;

	[Params(1_000)]
	public int Accessors;

	[Params(1_000)]
	public int KeysCount;

	[Params(1)]
	public int Rounds;

	private List<string> Keys = null!;
	private TimeSpan CacheDuration = TimeSpan.FromDays(10);
	private IServiceProvider ServiceProvider = null!;

	private FusionCache _FusionCache = null!;
	private FusionCache _FusionCache_AsyncKeyed = null!;
	private FusionCache _FusionCache_AsyncKeyedOptimized = null!;
	private FusionCache _FusionCache_Probabilistic = null!;
	private CacheStack _CacheTower = null!;
	private IEasyCachingProvider _EasyCaching = null!;
	private CachingService _LazyCache = null!;

	[GlobalSetup]
	public void Setup()
	{
		// SETUP KEYS
		Keys = [];
		for (int i = 0; i < KeysCount; i++)
		{
			var key = Guid.NewGuid().ToString("N") + "-" + i.ToString();
			Keys.Add(key);
		}

		// SETUP DI
		var services = new ServiceCollection();
		services.AddEasyCaching(options => { options.UseInMemory("default"); });
		ServiceProvider = services.BuildServiceProvider();

		// SETUP CACHES
		_FusionCache = new FusionCache(new FusionCacheOptions { DefaultEntryOptions = new FusionCacheEntryOptions(CacheDuration) });
		_FusionCache_AsyncKeyed = new FusionCache(new FusionCacheOptions { DefaultEntryOptions = new FusionCacheEntryOptions(CacheDuration) }, memoryLocker: new AsyncKeyedMemoryLocker());
		_FusionCache_AsyncKeyedOptimized = new FusionCache(new FusionCacheOptions { DefaultEntryOptions = new FusionCacheEntryOptions(CacheDuration) }, memoryLocker: new AsyncKeyedMemoryLocker(new AsyncKeyedLock.AsyncKeyedLockOptions()
		{
			PoolSize = Environment.ProcessorCount * 2,
			PoolInitialFill = Environment.ProcessorCount * 2
		}));
		_FusionCache_Probabilistic = new FusionCache(new FusionCacheOptions { DefaultEntryOptions = new FusionCacheEntryOptions(CacheDuration) }, memoryLocker: new ProbabilisticMemoryLocker());
		_CacheTower = new CacheStack(null, new CacheStackOptions([new MemoryCacheLayer()]) { Extensions = [new AutoCleanupExtension(TimeSpan.FromMinutes(5))] });
		_EasyCaching = ServiceProvider.GetRequiredService<IEasyCachingProviderFactory>().GetCachingProvider("default");
		_LazyCache = new CachingService(new MemoryCacheProvider(new MemoryCache(new MemoryCacheOptions())));
		_LazyCache.DefaultCachePolicy = new CacheDefaults { DefaultCacheDurationSeconds = (int)(CacheDuration.TotalSeconds) };
	}

	[GlobalCleanup]
	public void Cleanup()
	{
		_FusionCache.Dispose();
		_FusionCache_AsyncKeyed.Dispose();
		_FusionCache_AsyncKeyedOptimized.Dispose();
		_FusionCache_Probabilistic.Dispose();
		_CacheTower.DisposeAsync().AsTask().Wait();
	}

	[Benchmark(Baseline = true)]
	public async Task FusionCache()
	{
		for (int i = 0; i < Rounds; i++)
		{
			var tasks = new ConcurrentBag<Task>();

			Parallel.ForEach(Keys, key =>
			{
				Parallel.For(0, Accessors, _ =>
				{
					var t = _FusionCache.GetOrSetAsync<SamplePayload>(
						key,
						async ct =>
						{
							await Task.Delay(FactoryDurationMs).ConfigureAwait(false);
							return new SamplePayload();
						}
					);
					tasks.Add(t.AsTask());
				});
			});

			await Task.WhenAll(tasks).ConfigureAwait(false);
		}
	}

	[Benchmark]
	public async Task FusionCache_AsyncKeyed()
	{
		for (int i = 0; i < Rounds; i++)
		{
			var tasks = new ConcurrentBag<Task>();

			Parallel.ForEach(Keys, key =>
			{
				Parallel.For(0, Accessors, _ =>
				{
					var t = _FusionCache_AsyncKeyed.GetOrSetAsync<SamplePayload>(
						key,
						async ct =>
						{
							await Task.Delay(FactoryDurationMs).ConfigureAwait(false);
							return new SamplePayload();
						}
					);
					tasks.Add(t.AsTask());
				});
			});

			await Task.WhenAll(tasks).ConfigureAwait(false);
		}
	}
}

Let me know if it happens on your machine too, and if you can spot the problem.

Thanks!

@MarkCiliaVincenti
Contributor Author

It's late here so I will take a look tomorrow but it would be nice if you could try to reduce as much clutter as possible so we could pinpoint the issue.

@jodydonetti
Collaborator

It's late here so I will take a look tomorrow

Of course, whenever you can Mark 🙂

it would be nice if you could try to reduce as much clutter as possible so we could pinpoint the issue

Eh, I tried to create an MRE but failed, which makes me think the problem may be in the combination between FusionCache and AsyncKeyedLock.

Will try again tomorrow and update you.

Thanks.

@MarkCiliaVincenti
Contributor Author

MarkCiliaVincenti commented Sep 13, 2024

Can you try something real quick? I'm not in front of a PC so I can't test exactly.

First of all, see if you can use generics instead of returning object as the releaser.

Secondly, since you're passing on a timeout, you shouldn't be doing the if-statement checks yourself.

/// <inheritdoc/>
	public async ValueTask<object?> AcquireLockAsync(string cacheName, string cacheInstanceId, string key, string operationId, TimeSpan timeout, ILogger? logger, CancellationToken token)
	{
		var releaser = _locker.GetOrAdd(key);
		var acquired = await releaser.SemaphoreSlim.WaitAsync(timeout, token).ConfigureAwait(false);
		return acquired ? releaser : null;
	}

can be simplified to this:

/// <inheritdoc/>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
	public async ValueTask<object?> AcquireLockAsync(string cacheName, string cacheInstanceId, string key, string operationId, TimeSpan timeout, ILogger? logger, CancellationToken token)
	{
		// LockAsync needs the key, and the returned task must be awaited
		return await _locker.LockAsync(key, timeout, token).ConfigureAwait(false);
	}

and:

/// <inheritdoc/>
	public void ReleaseLock(string cacheName, string cacheInstanceId, string key, string operationId, object? lockObj, ILogger? logger)
	{
		if (lockObj is null)
			return;

		try
		{
			((AsyncKeyedLockReleaser<string>)lockObj).Dispose();
		}
		catch (Exception exc)
		{
			if (logger?.IsEnabled(LogLevel.Warning) ?? false)
				logger.Log(LogLevel.Warning, exc, "FUSION [N={CacheName} I={CacheInstanceId}] (O={CacheOperationId} K={CacheKey}): an error occurred while trying to release a SemaphoreSlim in the memory locker", cacheName, cacheInstanceId, operationId, key);
		}
	}

to:

/// <inheritdoc/>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
	public void ReleaseLock(string cacheName, string cacheInstanceId, string key, string operationId, object? lockObj, ILogger? logger)
	{
		try
		{
			((AsyncKeyedLockTimeoutReleaser<string>)lockObj).Dispose();
		}
		catch (Exception exc)
		{
			if (logger?.IsEnabled(LogLevel.Warning) ?? false)
				logger.Log(LogLevel.Warning, exc, "FUSION [N={CacheName} I={CacheInstanceId}] (O={CacheOperationId} K={CacheKey}): an error occurred while trying to release a SemaphoreSlim in the memory locker", cacheName, cacheInstanceId, operationId, key);
		}
	}

@MarkCiliaVincenti
Contributor Author

I'm looking at the code I sent you yesterday.

When it failed to enter a semaphore you weren't disposing anything, meaning you skipped all of this code:
https://github.com/MarkCiliaVincenti/AsyncKeyedLock/blob/master/AsyncKeyedLock/AsyncKeyedLockDictionary.cs#L168

        [MethodImpl(MethodImplOptions.AggressiveInlining)]
        public void ReleaseWithoutSemaphoreRelease(AsyncKeyedLockReleaser<TKey> releaser)
        {
            if (PoolingEnabled)
            {
#if NET9_0_OR_GREATER
                releaser.Lock.Enter();
#else
                Monitor.Enter(releaser);
#endif

                if (releaser.ReferenceCount == 1)
                {
                    TryRemove(releaser.Key, out _);
                    releaser.IsNotInUse = true;
#if NET9_0_OR_GREATER
                    releaser.Lock.Exit();
#else
                    Monitor.Exit(releaser);
#endif
                    _pool.PutObject(releaser);
                    return;
                }
                --releaser.ReferenceCount;
#if NET9_0_OR_GREATER
                releaser.Lock.Exit();
#else
                Monitor.Exit(releaser);
#endif
            }
            else
            {
                Monitor.Enter(releaser);

                if (releaser.ReferenceCount == 1)
                {
                    TryRemove(releaser.Key, out _);
                    releaser.IsNotInUse = true;
                    Monitor.Exit(releaser);
                    return;
                }
                --releaser.ReferenceCount;
                Monitor.Exit(releaser);
            }
        }

the reference count wasn't getting decremented, meaning the item remains in the dictionary and the reference count is out of sync. Furthermore the releaser class (which includes the instance of the semaphore) doesn't get put back into the pool for reuse.

I don't think it should deadlock though, but let's start with this first.

@MarkCiliaVincenti
Contributor Author

MarkCiliaVincenti commented Sep 14, 2024

Some other notes:

  1. try to avoid object in the interface. At least use ValueTask<IDisposable>
  2. if the timeout is not being used, don't pass it on. It's extra allocations. A class of AsyncKeyedLockTimeoutReleaser<TKey> needs to be created every time you lock. It's not a heavy one. It only has a boolean (enteredSemaphore) and the instance of AsyncKeyedLockReleaser<TKey> which came from the pool, but it's an allocation anyway. So if you're passing an infinite timeout, for example, it would be better to call an alternate method that doesn't accept timeouts. It would be more performant. Do you always have actual timeouts?

@MarkCiliaVincenti
Contributor Author

I also understand that you're probably used to receiving either an object or a null, and deciding whether to run the code or not based on checking whether the return was null. But you can't have it that way. That's going to break the pooling in my library and others (if they have pooling). So you'll need to do some refactoring.

Usually the way you'd use AsyncKeyedLock is to call LockAsync and you'd get the AsyncKeyedLockTimeoutReleaser instance back, which has the boolean EnteredSemaphore in it, and you check against that.
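For readers following along, that pattern looks roughly like this (a sketch grounded in the snippets in this thread; `_locker` is assumed to be an `AsyncKeyedLocker<string>` field, and `key`, `timeout` and `cancellationToken` are assumed local variables):

```csharp
// Sketch of the timeout usage pattern described above: always dispose the
// releaser (here via `using`) so the internal bookkeeping runs, and branch
// on EnteredSemaphore to know whether the lock was actually taken.
using (var releaser = await _locker.LockAsync(key, timeout, cancellationToken))
{
    if (releaser.EnteredSemaphore)
    {
        // lock acquired: run the critical section here
    }
    else
    {
        // timed out: lock NOT taken, skip the critical section;
        // Dispose() still decrements the reference count and recycles the entry
    }
}
```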

@jodydonetti
Collaborator

jodydonetti commented Sep 14, 2024

Hi @MarkCiliaVincenti , yesterday I went to sleep, quite a week to recover from.

  1. try to avoid object in the interface. At least use ValueTask<IDisposable>

If you mean the return value for the AcquireLockAsync, it is an object because it can adapt to any locking strategy.

For example the default implementation uses a SemaphoreSlim, which would then need to be put into some other wrapper class so that it can implement the IDisposable pattern to release the lock (and, so, more allocations).

And yes, I know that SemaphoreSlim already implements IDisposable, but that is for disposing the entire semaphore, not for releasing the lock, so it's a different thing.
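To illustrate the wrapper idea (hypothetical code, not FusionCache's actual implementation): disposing the wrapper releases the lock, while disposing the SemaphoreSlim itself would destroy it.

```csharp
using System;
using System.Threading;

// Hypothetical wrapper (not FusionCache code): Dispose() releases the lock by
// calling Release() on the underlying SemaphoreSlim; it does NOT dispose the
// semaphore itself, which would tear it down for everyone.
sealed class SemaphoreReleaser : IDisposable
{
    private readonly SemaphoreSlim _semaphore;
    public SemaphoreReleaser(SemaphoreSlim semaphore) => _semaphore = semaphore;
    public void Dispose() => _semaphore.Release();
}
```

This is the extra allocation per acquisition mentioned above: returning a plain object lets the default implementation hand back the SemaphoreSlim directly and skip the wrapper.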

  2. if the timeout is not being used, don't pass it on. It's extra allocations. A class of AsyncKeyedLockTimeoutReleaser<TKey> needs to be created every time you lock. It's not a heavy one. It only has a boolean (enteredSemaphore) and the instance of AsyncKeyedLockReleaser<TKey> which came from the pool, but it's an allocation anyway.

I didn't know that about AsyncKeyedLock, thanks, I'm getting to know it better.
Having said that, I would've expected both methods (with and without the timeout) to return the same class/interface (or different classes deriving from the same base class/interface) for uniformity, and the method taking the timeout param to return the simpler one when the timeout value is infinite (but, to be clear, I can of course check for that beforehand).

So if you're passing on an infinite timeout for example, it would be better if you call an alternate method for it that doesn't accept timeouts. It would be more performant. Do you always have actual timeouts?

It depends on the user's usage, but in general I would not expect to frequently have a non-infinite timeout.

When it failed to enter a semaphore you weren't disposing anything, meaning you skip all this code:
[...]
the reference count wasn't getting decremented, meaning the item remains in the dictionary and the reference count is out of sync. Furthermore the releaser class (which includes the instance of the semaphore) doesn't get put back into the pool for reuse.
[...]
Usually the way you'd use AsyncKeyedLock is to call LockAsync and you'd get the AsyncKeyedLockTimeoutReleaser instance back, which has the boolean EnteredSemaphore in it, and you check against that.

One thing I'd like to understand though is: what is the use-case for returning a special AsyncKeyedLockTimeoutReleaser<TKey>?
It seems to me that it is just a wrapper for an AsyncKeyedLockReleaser<TKey> + the bool indicating if it acquired the lock, basically to know if the lock has been acquired.
But then it should be released (a.k.a. call Dispose on it) nonetheless, so why not just return a nullable one? If it's null, it has not been acquired, otherwise, it is. In this way users cannot forget to release it.

Also, and this is my other question: what can I do with an AsyncKeyedLockTimeoutReleaser<TKey> whose EnteredSemaphore is false, apart from immediately releasing (disposing) it? I mean, if EnteredSemaphore is false, the critical section is not protected (a.k.a. the lock was not taken), so why would I want to wait before releasing it?
And if I don't have to wait to release it but must do it as soon as possible, why should I be the one to do it? The method could just return null: I would have nothing to release, and null would signal "lock not taken".

The only thing I can think of is a rationale like "I want to use the using var x = Lock() expression", but using statements can normally handle nulls, so by returning null you would get both things together.

For example this works fine:

using (var foo = (IDisposable?)null) {
  Console.WriteLine($"foo is null: {foo is null}");
}

I'm positive I'm missing something here, maybe some scenarios or design consideration: can you help me understand?

And to be clear: I'm not suggesting to switch to a different design, change your library, or else I'm trying to understand better.

Btw I'm updating the code per your suggestions, will update with the results.

Thanks!

@MarkCiliaVincenti
Contributor Author

The reason is simple actually. There's always a reference counter. It needs to be decremented by 1. If it's zero it needs to be removed from the dictionary. Furthermore, the object needs to always be returned to the pool. If I just returned null I can't do anything about this and the dictionary is not cleaned and the pool will not get replenished.
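The invariant being described can be sketched like this (a deliberately naive, stdlib-only illustration, not AsyncKeyedLock's actual implementation, which additionally pools the entries): every acquire must be balanced by a release, even when the semaphore was never entered, otherwise the count never reaches zero and the entry is never removed from the dictionary.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Greatly simplified sketch of per-key reference counting: each key maps to an
// entry holding a count, and the entry is only removed when the count hits zero.
class KeyedEntry
{
    public int RefCount;
    public readonly SemaphoreSlim Semaphore = new(1, 1);
}

class NaiveKeyedLocker
{
    private readonly ConcurrentDictionary<string, KeyedEntry> _map = new();

    public KeyedEntry Acquire(string key)
    {
        while (true)
        {
            var entry = _map.GetOrAdd(key, _ => new KeyedEntry());
            lock (entry)
            {
                // Re-check: another thread may have removed this entry between
                // GetOrAdd and taking the lock; if so, retry with a fresh one.
                if (_map.TryGetValue(key, out var current) && ReferenceEquals(current, entry))
                {
                    entry.RefCount++;
                    return entry;
                }
            }
        }
    }

    public void Release(string key, KeyedEntry entry)
    {
        lock (entry)
        {
            if (--entry.RefCount == 0)
                _map.TryRemove(key, out _); // a pooled implementation would recycle the entry here
        }
    }
}
```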

@jodydonetti
Collaborator

jodydonetti commented Sep 14, 2024

The reason is simple actually. There's always a reference counter. It needs to be decremented by 1. If it's zero it needs to be removed from the dictionary. Furthermore, the object needs to always be returned to the pool. If I just returned null I can't do anything about this and the dictionary is not cleaned and the pool will not get replenished.

Sorry I'm not sure I understand, or maybe I expressed myself badly.
What I mean is that in your LockAsync method, before returning it, you would do something like this (pseudo code):

function LockAsync(...) {
  [...]
  
  if (EnteredSemaphore) {
    return releaser;
  }
  
  releaser.Dispose();
  return null;
}

If the only thing I can do with an AsyncKeyedLockTimeoutReleaser<TKey> object where EnteredSemaphore is false is immediately call Dispose() on it, that can be done automatically, returning null instead to signal that the lock has not been taken.

I think this summarizes the main point I'd like to understand:

Also, and this is my other question: what can I do with an AsyncKeyedLockTimeoutReleaser whose EnteredSemaphore is false, apart from immediately releasing (disposing) it? I mean, if EnteredSemaphore is false, the critical section is not protected (a.k.a. the lock was not taken), so why would I want to wait before releasing it?
And if I don't have to wait to release it but must do it as soon as possible, why should I be the one to do it? The method could just return null: I would have nothing to release, and null would signal "lock not taken".

Thanks!

@MarkCiliaVincenti
Contributor Author

That would introduce a race condition.

Imagine thread A is locked on key ABC
Thread B enters and tries to obtain the lock with a timeout but fails, and immediately that thread is suspended.
Meanwhile thread A finishes.
Thread B resumes and sees that it failed to obtain the semaphore, decrements by 1, and says "ah I can remove this lock from the dictionary, no need for it".
Thread C then comes in and requests a lock for ABC, but there's nothing in the dictionary, so B and C end up running concurrently.

Furthermore if B puts the object in the pool before it's finished processing there's a chance that it gets picked up by another request whilst B is still processing.

A big recipe for disaster.

These operations can't be done preemptively.

@jodydonetti
Collaborator

Getting back to this:

can be simplified to this:

/// <inheritdoc/>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public async ValueTask<object?> AcquireLockAsync(string cacheName, string cacheInstanceId, string key, string operationId, TimeSpan timeout, ILogger? logger, CancellationToken token)
{
	return await _locker.LockAsync(key, timeout, token).ConfigureAwait(false);
}

Based on your next notes, I also need to handle an infinite timeout and the lock not getting acquired (because of a timeout) + immediate release (if I got that part right), so it should be changed to this:

public async ValueTask<object?> AcquireLockAsync(string cacheName, string cacheInstanceId, string key, string operationId, TimeSpan timeout, ILogger? logger, CancellationToken token)
{
  IDisposable? releaser;
  
  if (timeout == Timeout.InfiniteTimeSpan)
  {
	  releaser = await _locker.LockAsync(key, token).ConfigureAwait(false);
  }
  else
  {
	  var tmp = await _locker.LockAsync(key, timeout, token).ConfigureAwait(false);
	  if (tmp.EnteredSemaphore)
	  {
		  releaser = tmp;
	  }
	  else
	  {
		  tmp.Dispose();
		  releaser = null;
	  }
  }
  
  return releaser;
}

Is this correct?

to:

/// <inheritdoc/>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
	public void ReleaseLock(string cacheName, string cacheInstanceId, string key, string operationId, object? lockObj, ILogger? logger)
	{
		try
		{
			((AsyncKeyedLockTimeoutReleaser<string>)lockObj).Dispose();
		}
		catch (Exception exc)
		{
			if (logger?.IsEnabled(LogLevel.Warning) ?? false)
				logger.Log(LogLevel.Warning, exc, "FUSION [N={CacheName} I={CacheInstanceId}] (O={CacheOperationId} K={CacheKey}): an error occurred while trying to release a SemaphoreSlim in the memory locker", cacheName, cacheInstanceId, operationId, key);
		}
	}

Since I have to call 2 different overloads based on the timeout (infinite or not) I would then need to check for both return types, correct?
And (again if I got the "immediately release" part right), also nulls.
Maybe it would be better to just cast it to IDisposable?
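The cast-to-IDisposable idea could look like this (a sketch, not the final FusionCache code; it relies on both releaser types implementing IDisposable, which the snippets in this thread show they do):

```csharp
using System;

static class LockRelease
{
    // Sketch of a uniform release path: whichever overload produced the
    // releaser, treat it as IDisposable; a null lock object (lock was never
    // acquired) is simply a no-op.
    public static void ReleaseLock(object? lockObj)
    {
        (lockObj as IDisposable)?.Dispose();
    }
}
```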

@jodydonetti
Collaborator

That would introduce a race condition.

Imagine thread A is locked on key ABC. Thread B enters and tries to obtain the lock with a timeout but fails, and immediately that thread is suspended. Meanwhile thread A finishes. Thread B resumes and sees that it failed to obtain the semaphore, decrements by 1, and says "ah I can remove this lock from the dictionary, no need for it". Thread C then comes in and requests a lock for ABC, but there's nothing in the dictionary, so B and C end up running concurrently.

Furthermore if B puts the object in the pool before it's finished processing there's a chance that it gets picked up by another request whilst B is still processing.

A big recipe for disaster.

These operations can't be done preemptively.

I have to think about this race condition a bit, will update later.

@MarkCiliaVincenti
Contributor Author

As to your penultimate comment, no, you cannot dispose just because you didn't enter the SemaphoreSlim. This will 100% create race conditions and bugs. You want correctness.

@jodydonetti jodydonetti modified the milestones: v1.4.0, v1.5.0 Sep 15, 2024
@jodydonetti jodydonetti modified the milestones: v1.5.0, v2.0.0 Nov 17, 2024