Is your feature request related to a problem? Please describe.
Strelka clients cannot query which files Strelka already holds in the Gatekeeper cache. As a result, clients consume excessive bandwidth sending duplicate files to the Frontend, where they are hashed and checked against the Gatekeeper cache.
Describe the solution you'd like
Enhance the Strelka client gRPC protocol so that clients can send a hash to the Frontend and receive a response that includes the Gatekeeper cache status and, optionally, the age of the cache entry. Clients receiving a cache-hit response SHOULD NOT send the cached file to the Frontend UNLESS the client request is configured to ignore Gatekeeper caching.
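A minimal sketch of the client-side decision this would enable. The `CacheStatus` enum and function names below are illustrative assumptions, not part of the existing Strelka protocol:

```python
import enum


class CacheStatus(enum.Enum):
    """Hypothetical Gatekeeper cache states the Frontend could return."""
    MISS = 0
    HIT = 1


def should_send_file(status: CacheStatus, ignore_cache: bool) -> bool:
    """Return True if the client should still upload the file.

    Per the proposal, a cache hit suppresses the upload unless the
    request is explicitly configured to bypass Gatekeeper caching.
    """
    return ignore_cache or status is not CacheStatus.HIT
```

A client would call this after the hash-lookup RPC and before streaming file bytes, so duplicate payloads never leave the client.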
Describe alternatives you've considered
It may also be desirable to implement local hash-based de-duplication in the clients, for environments that are more sensitive to connection volume than to bandwidth. However, global de-duplication is easier to implement and more useful at large scale.
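The local alternative could be as simple as a client tracking the hashes it has already submitted. A sketch (in-memory and per-process only; the class name is illustrative):

```python
import hashlib


class LocalDeduper:
    """Skip files this client process has already submitted.

    Trades client memory for connection volume: a duplicate file is
    never sent twice by this client, but the seen-set is not shared
    across clients, so globally duplicated files still reach the
    Frontend -- the limitation noted above.
    """

    def __init__(self) -> None:
        self._seen: set[str] = set()

    def is_duplicate(self, data: bytes) -> bool:
        """Hash the file contents and record whether it was seen before."""
        digest = hashlib.sha256(data).hexdigest()
        if digest in self._seen:
            return True
        self._seen.add(digest)
        return False
```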
Additional context