-
Hi @pil0t I think that my request for supporting expiration tokens would also solve this. I have created PR #112 for @jodydonetti to review. Essentially, you could attach one or more CancellationTokens to the cached values, and when a token is cancelled it calls the Remove method locally (and remotely, if you have a distributed cache attached). So instead of scanning the remote cache, you just need to manage some CancellationTokenSources and cancel the ones you need.
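The mechanism can be sketched with nothing but the BCL: here a plain `ConcurrentDictionary` stands in for the cache, and the `Set` helper and key names are purely illustrative (this is not the actual PR #112 API):

```csharp
using System.Collections.Concurrent;

// A plain dictionary stands in for the cache; with FusionCache, the
// registration callback would call cache.Remove(key) instead.
var cache = new ConcurrentDictionary<string, string>();
var authReset = new CancellationTokenSource();

void Set(string key, string value, CancellationToken expirationToken)
{
    cache[key] = value;
    // When the token is cancelled, evict this entry (locally; a distributed
    // cache attached to it could be notified the same way).
    expirationToken.Register(() => cache.TryRemove(key, out _));
}

Set("auth:1", "alice", authReset.Token);
Set("auth:2", "bob", authReset.Token);
Set("profile:1", "alice-profile", CancellationToken.None);

authReset.Cancel(); // evicts every entry tied to this token

Console.WriteLine(cache.ContainsKey("auth:1"));    // False
Console.WriteLine(cache.ContainsKey("profile:1")); // True
```

The upside of this shape is that cancelling one token source evicts a whole group of keys without any scan of the backing store.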
-
Hi @pil0t and thanks for using FusionCache!
Eh, that would be interesting to me, too 😅, but there's a catch: FusionCache has been designed to use the 2 common caching abstractions in .NET (`IMemoryCache` and `IDistributedCache`). Because of this design decision, any available impl of those abstractions can be easily used, without anyone having to explicitly create an implementation of some custom, FusionCache-specific abstraction. Of course this has advantages - like the one just mentioned - but also disadvantages, like being limited to the features defined in those abstractions. So, since there's no "delete by prefix" in those abstractions, that is a show stopper. See this comment for more on this.

### V1 -> V2

But, as pointed out here, I am very close to finally being able to release a feature-complete v1.0, based on my plans. After that I think I'll start putting the pieces together for a v2, which would probably follow the design highlighted in that comment.

### Alternatives

Anyway, there's a solution you can use today: it's a variation of an approach commonly known as "key-based cache expiration" (see here), which I've personally used a couple of times with success. Basically you store in a cache entry a value that is used as a prefix/suffix or anyhow as a part of the other cache keys (usually a timestamp, but it can be anything): that value drives the construction of the other, actual cache keys, the ones with the data you need. When that cached value changes, it automatically changes all the other keys in one fell swoop, and the normal cache expiration takes care of the cleanup. So, for example, in your scenario you could have one value for each of these things:
Then you can combine them together into a string and hash it (even a simple MD5 would be good in this case: it doesn't need to be cryptographically secure, and MD5 is very fast): that hash string would become part of the other cache keys, something like "{hash}:auth:*". Of course, if the values are not that big, you can instead simply concatenate them together into a string directly. This way, as soon as one of the above 3 values changes, the hash (or the concatenation) changes, and so all the affected cache keys change too.

And again: I used this approach on production systems with a quite big amount of data and requests, and it worked like a charm (and unless you cache data with expirations of days, the cleanup will happen automatically without you even noticing).

Let me know what you think.

Hope this helps!
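The hashing part of this could be sketched like this (the version values and the `auth:{userId}` key shape are just examples; in a real setup each version value would itself live in the cache and be bumped when the related data changes):

```csharp
using System.Security.Cryptography;
using System.Text;

// Hypothetical version values: bump any of them and every derived key changes.
var tenantVersion = "2023-01-10T12:00:00Z";
var rolesVersion  = "v7";
var policyVersion = "v3";

static string HashPrefix(params string[] parts)
{
    // MD5 is fine here: the hash only namespaces keys, it is not a security boundary.
    var bytes = MD5.HashData(Encoding.UTF8.GetBytes(string.Join("|", parts)));
    return Convert.ToHexString(bytes).ToLowerInvariant();
}

string KeyFor(int userId) =>
    $"{HashPrefix(tenantVersion, rolesVersion, policyVersion)}:auth:{userId}";

var before = KeyFor(42);
rolesVersion = "v8";    // a global change bumps one version value...
var after = KeyFor(42); // ...and every derived key changes with it

Console.WriteLine(before != after); // True
```

The old entries under the previous hash are simply never read again, and normal expiration cleans them up.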
-
Hi,
In some cases, there is a requirement to remove multiple cache keys by wildcard/pattern.
Background:
For example, we have auth data cached with keys like
auth:{userId}
and when something is updated for a single user, we just update that key, or remove it by exact match. Data is updated on each node, everybody is happy. But rarely, we have a different kind of update:
In this case we need to expire all
auth:*
keys. Right now I haven't found a straightforward way to do this, except querying Redis with something like
KEYS auth:*
and then iterating over the results, calling cache.Remove(key)
one by one. This doesn't look like the most efficient way, and in general this should be a quick operation.
The expected thing would be having something like:
cache.RemoveByPattern("auth:*");
accepting a pattern similar to https://redis.io/commands/keys/.
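Until something like that exists, the scan-then-remove workaround described above might be sketched roughly like this (assuming StackExchange.Redis; note that `IServer.Keys` uses SCAN under the hood, which is preferable to a blocking KEYS call in production, and that FusionCache may store entries in Redis under its own key prefix, so the actual pattern may need adjusting):

```csharp
using StackExchange.Redis;

var mux = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
var server = mux.GetServer("localhost", 6379);
var db = mux.GetDatabase();

foreach (var key in server.Keys(pattern: "auth:*"))
{
    // With FusionCache you would call cache.Remove(...) here instead, so that
    // each node's local in-memory layer is cleared too, not just Redis.
    await db.KeyDeleteAsync(key);
}
```

This works, but it is O(keyspace) on the Redis side and needs extra plumbing to also invalidate the memory layer on every node, which is exactly why a first-class RemoveByPattern would be nicer.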