Is your feature request related to a problem? Please describe.
Today the cleanup logic is mostly configured through TTL. During traffic peaks, an origin can take up much more disk space than desired, and it would be ideal if the cleanup logic could run more aggressively in that situation.
Describe the solution you'd like
It could be something simple: if disk usage > 70%, use TTL * 0.5; if disk usage > 80%, use TTL * 0.2; if disk usage > 90%, use TTL * 0.1. Or it could be a logarithmic function based on the storage space left (see the sketch below).
The code lives in https://github.com/uber/kraken/blob/master/lib/store/cleanup.go
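A rough sketch of what the threshold-based scaling could look like in Go. The helper names `diskUsagePercent` and `scaledTTL` are hypothetical and do not exist in the kraken codebase; `syscall.Statfs` is Linux-specific:

```go
package cleanup

import (
	"syscall"
	"time"
)

// diskUsagePercent returns the used fraction (0-100) of the filesystem
// backing the given path. Linux-only sketch using statfs.
func diskUsagePercent(path string) (float64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return 0, err
	}
	total := st.Blocks * uint64(st.Bsize)
	avail := st.Bavail * uint64(st.Bsize)
	used := total - avail
	return float64(used) / float64(total) * 100, nil
}

// scaledTTL shrinks the configured TTL as the disk fills up, following the
// thresholds proposed above.
func scaledTTL(baseTTL time.Duration, usagePercent float64) time.Duration {
	switch {
	case usagePercent > 90:
		return time.Duration(float64(baseTTL) * 0.1)
	case usagePercent > 80:
		return time.Duration(float64(baseTTL) * 0.2)
	case usagePercent > 70:
		return time.Duration(float64(baseTTL) * 0.5)
	default:
		return baseTTL
	}
}
```

The cleanup loop could call something like `scaledTTL` each pass before deciding which entries have expired, so the effective TTL tightens automatically as the disk fills and relaxes back to the configured value once space is reclaimed.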
Describe alternatives you've considered
Storage LRU capacity is configurable, but it counts the number of blobs, not their size. It's difficult to make it work on size, because then one huge blob could cause many smaller blobs to be deleted, which would cause other issues.