|
-# L1L2RedisCache
-
-`L1L2RedisCache` is an implementation of [`IDistributedCache`](https://github.com/dotnet/runtime/blob/main/src/libraries/Microsoft.Extensions.Caching.Abstractions/src/IDistributedCache.cs) with a strong focus on performance. It leverages [`IMemoryCache`](https://github.com/dotnet/runtime/blob/main/src/libraries/Microsoft.Extensions.Caching.Abstractions/src/IMemoryCache.cs) as a level 1 cache and [`RedisCache`](https://github.com/dotnet/aspnetcore/blob/main/src/Caching/StackExchangeRedis/src/RedisCache.cs) as a level 2 cache, with level 1 evictions being managed via [Redis pub/sub](https://redis.io/topics/pubsub).
-
-`L1L2RedisCache` is heavily inspired by development insights shared over the past several years by [StackOverflow](https://stackoverflow.com/). It attempts to distill those concepts into a highly accessible, more performant `IDistributedCache` implementation.
-
-I expect to gracefully decommission this project when [`StackExchange.Redis`](https://github.com/StackExchange/StackExchange.Redis) has [client-side caching](https://redis.io/docs/latest/develop/use/client-side-caching/) support.
-
-## Configuration
-
-`L1L2RedisCache` is intended to be used as an `IDistributedCache` implementation.
-
-`L1L2RedisCache` can be registered during startup with the following `IServiceCollection` extension method:
+# MessagingRedisCache

-```
-services.AddL1L2RedisCache(options =>
-{
-    options.Configuration = "localhost";
-    options.InstanceName = "Namespace:Prefix:";
-});
-```
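
For reference, here is a minimal sketch of consuming the registered cache through the standard `IDistributedCache` API. The `WeatherService` class, key, value, and expiration are illustrative only; the methods and options come from `Microsoft.Extensions.Caching.Distributed`:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class WeatherService
{
    private readonly IDistributedCache _cache;

    public WeatherService(IDistributedCache cache) =>
        _cache = cache;

    public async Task<string> GetForecastAsync(
        CancellationToken cancellationToken = default)
    {
        // Reads are opportunistically served from the L1 memory cache;
        // misses fall through to L2 (Redis).
        var cached = await _cache.GetStringAsync(
            "forecast", cancellationToken);
        if (cached is not null)
        {
            return cached;
        }

        var forecast = "sunny"; // stand-in for real work

        // Entries with absolute expirations are cached in both L1 and L2;
        // per the Considerations section, sliding expirations are L2-only.
        await _cache.SetStringAsync(
            "forecast",
            forecast,
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5),
            },
            cancellationToken);

        return forecast;
    }
}
```
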
+[`MessagingRedisCache`](/src/MessagingRedisCache/README.md) is an implementation of [`IDistributedCache`](https://github.com/dotnet/runtime/blob/main/src/libraries/Microsoft.Extensions.Caching.Abstractions/src/IDistributedCache.cs) that uses [`RedisCache`](https://github.com/dotnet/aspnetcore/blob/main/src/Caching/StackExchangeRedis/src/RedisCache.cs) as its base implementation. `MessagingRedisCache` utilizes [Redis pub/sub](https://redis.io/topics/pubsub) so that cache entries can be synchronized across a distributed system, where deferring directly to Redis is not always performant. Because of this, it is a functional backing store for [`HybridCache`](https://learn.microsoft.com/en-us/aspnet/core/performance/caching/hybrid), one that will also evict the corresponding `IMemoryCache` entries across a distributed system.

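Since `HybridCache` can use any registered `IDistributedCache` as its distributed store, layering it on top might look like the following sketch. `AddHybridCache` and `GetOrCreateAsync` are standard `Microsoft.Extensions.Caching.Hybrid` APIs; the key and factory are illustrative, and registering `MessagingRedisCache` itself is done as described in its README:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Hybrid;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Register MessagingRedisCache as the IDistributedCache backing store as
// described in /src/MessagingRedisCache/README.md, then layer HybridCache
// on top of it.
services.AddHybridCache();

await using var provider = services.BuildServiceProvider();
var hybridCache = provider.GetRequiredService<HybridCache>();

// Hits are served from HybridCache's in-process cache; misses fall through
// to the distributed (Redis) store, and pub/sub keeps in-process entries
// evicted consistently across instances.
var forecast = await hybridCache.GetOrCreateAsync(
    "forecast",
    async cancellationToken =>
    {
        await Task.Delay(10, cancellationToken); // stand-in for real work
        return "sunny";
    });
```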
|
-`L1L2RedisCache` options are an extension of the standard `RedisCache` [`RedisCacheOptions`](https://github.com/dotnet/aspnetcore/blob/main/src/Caching/StackExchangeRedis/src/RedisCacheOptions.cs). The following additional customizations are supported:
-
-### MessagingType
-
-The type of messaging system to use for L1 memory cache eviction (a configuration sketch follows the table).
-
-| MessagingType | Description | Suggestion |
-| - | - | - |
-| `Default` | Use standard `L1L2RedisCache` [pub/sub](https://redis.io/topics/pubsub) messages for L1 memory cache eviction. | Default behavior. The Redis server requires no additional configuration. |
-| `KeyeventNotifications` | Use [keyevent notifications](https://redis.io/topics/notifications) for L1 memory cache eviction instead of standard `L1L2RedisCache` [pub/sub](https://redis.io/topics/pubsub) messages. The Redis server must have keyevent notifications enabled. | This is only advisable if the Redis server is already using [keyevent notifications](https://redis.io/topics/notifications) with at least a `ghE` configuration and the majority of keys in the server are managed by `L1L2RedisCache`. |
-| `KeyspaceNotifications` | Use [keyspace notifications](https://redis.io/topics/notifications) for L1 memory cache eviction instead of standard `L1L2RedisCache` [pub/sub](https://redis.io/topics/pubsub) messages. The Redis server must have keyspace notifications enabled. | This is only advisable if the Redis server is already using [keyspace notifications](https://redis.io/topics/notifications) with at least a `ghK` configuration and the majority of keys in the server are managed by `L1L2RedisCache`. |
-
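
For illustration, here is a sketch of opting in to keyevent notifications. The option and member names are taken from the table above, but the exact `MessagingType` enum type name is an assumption, and enabling notifications on the Redis server uses standard Redis configuration:

```csharp
services.AddL1L2RedisCache(options =>
{
    options.Configuration = "localhost";
    options.InstanceName = "Namespace:Prefix:";

    // Requires the Redis server to have keyevent notifications enabled,
    // e.g. CONFIG SET notify-keyspace-events ghE
    // (assumed enum type name; see the table above for documented values)
    options.MessagingType = MessagingType.KeyeventNotifications;
});
```
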
-## Performance
-
-`L1L2RedisCache` will generally outperform `RedisCache`, especially with high volumes or large cache entries. Because entries are opportunistically served from memory instead of Redis, the costs of latency, network traffic, and Redis operations are avoided. Actual performance gains will depend heavily on the impact of the aforementioned factors.
-
-## Considerations
+# L1L2RedisCache

-Due to the complex nature of a distributed L1 memory cache, cache entries with sliding expirations are only stored in L2 (Redis). These entries will show no performance improvement over the standard `RedisCache`, but incur no performance penalty.
+[`L1L2RedisCache`](/src/L1L2RedisCache/README.MD) is an implementation of [`IDistributedCache`](https://github.com/dotnet/runtime/blob/main/src/libraries/Microsoft.Extensions.Caching.Abstractions/src/IDistributedCache.cs) with emphasis on performance. It leverages [`IMemoryCache`](https://github.com/dotnet/runtime/blob/main/src/libraries/Microsoft.Extensions.Caching.Abstractions/src/IMemoryCache.cs) as a level 1 cache and [`MessagingRedisCache`](/src/MessagingRedisCache/README.md) as a level 2 cache, with level 1 evictions being managed via [Redis pub/sub](https://redis.io/topics/pubsub).
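
To make the level 1/level 2 flow concrete, here is a simplified sketch of the pattern described above. It is not the library's actual implementation; the eviction channel name and the L1 backfill policy are illustrative (a real implementation would also propagate entry expirations to L1):

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;
using StackExchange.Redis;

public class L1L2ReadPathSketch
{
    private readonly IMemoryCache _l1;
    private readonly IDistributedCache _l2;

    public L1L2ReadPathSketch(
        IMemoryCache l1,
        IDistributedCache l2,
        IConnectionMultiplexer connection)
    {
        _l1 = l1;
        _l2 = l2;

        // Each instance subscribes to eviction messages so that its L1
        // entries stay consistent with writes made by other instances.
        connection.GetSubscriber().Subscribe(
            RedisChannel.Literal("cache:evictions"),
            (_, message) =>
            {
                if (message.HasValue)
                {
                    _l1.Remove(message.ToString());
                }
            });
    }

    public async Task<byte[]?> GetAsync(string key)
    {
        // L1 hit: no Redis round trip at all.
        if (_l1.TryGetValue(key, out byte[]? value))
        {
            return value;
        }

        // L1 miss: fall back to L2 (Redis) and backfill L1.
        value = await _l2.GetAsync(key);
        if (value is not null)
        {
            _l1.Set(key, value);
        }

        return value;
    }
}
```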