Add support for IDistributedCache to the SDK #196
We'll probably use a DI-based contract resolver and load classes; the question is whether to use Autofac or something simpler such as Scrutor: https://github.com/khellang/Scrutor
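For illustration, a minimal Scrutor sketch of what that convention-based registration could look like (the marker interface and class names here are hypothetical, not SDK types):

```csharp
using Microsoft.Extensions.DependencyInjection;
// Scrutor adds the Scan(...) extension method to IServiceCollection.

public interface IContractResolver { }                         // hypothetical marker interface
public class TaxonomyContractResolver : IContractResolver { }  // hypothetical implementation

public static class ResolverRegistration
{
    public static IServiceCollection AddContractResolvers(this IServiceCollection services) =>
        services.Scan(scan => scan
            // Scan the assembly that contains the implementations.
            .FromAssemblyOf<TaxonomyContractResolver>()
            // Pick every non-abstract class assignable to the marker interface.
            .AddClasses(classes => classes.AssignableTo<IContractResolver>())
            // Register each class against the interfaces it implements.
            .AsImplementedInterfaces()
            .WithTransientLifetime());
}
```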
Done in #226
Hey @xantari
@petrsvihlik Opened bug #227, which I found while testing this.
@petrsvihlik Found another one: #228
@petrsvihlik So far it seems to work well, except that I've found a different issue I hadn't anticipated. Everyone has been working from home for a few months now, so we are operating over VPN. Our VPN tunnel maxes out at around 1 MB/s (which is another issue altogether) and I don't have super fast upload speed. The BSON-serialized data is fairly massive for some reason when placed into the cache, which sends/receives data over the VPN to our SQL Server (which uses in-memory SQL tables, so it's supposed to be fast). I have no idea why the BSON objects are so massive, but my home page took about 99 seconds to load, as it had to store/fetch the cached data over the VPN. See here for the length of the data: you'll see our mega menu is 14 megabytes! I'm trying to figure out why the BSON values are so large.
BTW, I tested a bit with some of the objects as just JSON, and they are just as big. So it's really just a matter for us to figure out how to get our own local distributed caches set up for each developer that don't go over a slow data link...
BTW, thanks so much for this feature!!!
@xantari thank you for testing it! I added an example of how to make it work with a local Redis instance on Windows: https://github.com/Kentico/kontent-delivery-sdk-net/wiki/Caching-responses#distributed-caching---example-from-v1400-rc1
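For anyone who wants to try this locally, the application-side plumbing is roughly the following (a sketch assuming the `Microsoft.Extensions.Caching.StackExchangeRedis` package; the instance name is an arbitrary choice):

```csharp
using Microsoft.Extensions.DependencyInjection;

public static class CacheSetup
{
    public static void ConfigureServices(IServiceCollection services)
    {
        // Point IDistributedCache at a Redis instance on the developer's own
        // machine, so cached entries never cross the slow VPN link.
        services.AddStackExchangeRedisCache(options =>
        {
            options.Configuration = "localhost:6379"; // local Redis, default port
            options.InstanceName = "KontentCache:";   // arbitrary key prefix (assumption)
        });
    }
}
```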
@xantari btw I tested ( |
Motivation
Currently, the SDK provides a `CacheManager` that uses `IMemoryCache`. In order to support more advanced scenarios utilizing commonly used distributed caching libraries, we'd like to introduce support for the `IDistributedCache` interface.
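For reference, `IDistributedCache` (as defined by `Microsoft.Extensions.Caching.Abstractions`) is a `byte[]`-based contract, and its shape is what drives the serialization questions below:

```csharp
using System.Threading;
using System.Threading.Tasks;

namespace Microsoft.Extensions.Caching.Distributed
{
    // The contract the SDK would target; note that values are raw byte arrays,
    // so every cached object must be serializable to/from byte[].
    public interface IDistributedCache
    {
        byte[] Get(string key);
        Task<byte[]> GetAsync(string key, CancellationToken token = default);
        void Set(string key, byte[] value, DistributedCacheEntryOptions options);
        Task SetAsync(string key, byte[] value, DistributedCacheEntryOptions options, CancellationToken token = default);
        void Refresh(string key);
        Task RefreshAsync(string key, CancellationToken token = default);
        void Remove(string key);
        Task RemoveAsync(string key, CancellationToken token = default);
    }
}
```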
Proposed solution
There are several approaches that we can take:

1. A new implementation of `IDeliveryCacheManager` that would be able to serialize the `Delivery*Response` objects (see the sketch after this list). Depending on whether the implementation will be `byte[]`-based or `string`-based, it will require some changes in the `Delivery*Response` and underlying objects (such as Taxonomy) in terms of decorating them with certain attributes, implementing serializers, etc. If we choose to go this way, I'd vote for:
   - moving all the logic from the `Delivery*Response` objects to the `DeliveryClient` (everything that's related to `IModelProvider` or `Newtonsoft.Json`)
   - removing `Kentico.Kontent.Delivery.Abstractions`'s dependency on `Newtonsoft.Json.Linq`
   - getting rid of the `dynamic` properties - I think we can replace them with strongly-typed collections of objects (not sure why we haven't done so yet)

   This approach is best performance-wise as it stores the objects after the strong typing facilitated by `IModelProvider`. It also goes in line with the caching model we've laid out with the introduction of the `IDeliveryCacheManager`.

2. Caching at the `HttpClient` level. This would be advantageous in that we could reuse the newly supported `HttpClientFactory` and use a more standardized approach to caching. However, I'm not sure how the cache eviction would work and whether we could somehow reuse the `IDeliveryCacheManager`, which is also used to invalidate cache keys (e.g. by webhook calls).

3. Caching in `DeliveryClient.GetDeliverResponseAsync`.
Let's try approach no. 1 first.
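For comparison, approach no. 2 would roughly amount to a caching `DelegatingHandler` plugged into `HttpClientFactory`. A sketch assuming raw response bytes keyed by request URL (the handler name and keying scheme are illustrative, not the SDK's actual design):

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// Hypothetical handler: intercepts HTTP traffic at the HttpClientFactory
// level and caches raw response payloads in IDistributedCache.
public class DistributedCacheHandler : DelegatingHandler
{
    private readonly IDistributedCache _cache;

    public DistributedCacheHandler(IDistributedCache cache) => _cache = cache;

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        string key = request.RequestUri.ToString();

        byte[] cached = await _cache.GetAsync(key, cancellationToken);
        if (cached != null)
        {
            // Serve the cached payload without hitting the Delivery API
            // (content-type header omitted for brevity).
            return new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new ByteArrayContent(cached)
            };
        }

        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
        if (response.IsSuccessStatusCode)
        {
            byte[] body = await response.Content.ReadAsByteArrayAsync();
            await _cache.SetAsync(key, body, cancellationToken);
            response.Content = new ByteArrayContent(body); // re-attach the consumed content
        }
        return response;
    }
}
```

It would be registered via `services.AddTransient<DistributedCacheHandler>()` plus `services.AddHttpClient(...).AddHttpMessageHandler<DistributedCacheHandler>()`. The eviction concern above remains open, though: webhook-driven invalidation through `IDeliveryCacheManager` wouldn't see these keys.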
Additional context
Serialization of the `Delivery*Response` objects: (Binary) serialization of Taxonomies not supported #192