- Avoid using recent versions (v0.1.12 or newer) of the `triomphe` crate to keep our MSRV (Minimum Supported Rust Version) at Rust 1.65 (#426, by @eaufavor).
  - `triomphe` v0.1.12 requires Rust 1.76 or newer, so it will not compile with our MSRV.
- docs: Fix per-entry expiration policy documentation (#421, by @arcstur).
- Ensure that a single call to `run_pending_tasks` evicts as many entries as possible from the cache (#417).
- Fixed a bug in `future::Cache` where pending `run_pending_tasks` calls may cause an infinite busy loop in the internal `schedule_write_op` method (#412). (A call-site sketch of `run_pending_tasks` follows this entry.)
  - This bug was introduced in v0.12.0 when the background threads were removed from `future::Cache`.
  - This bug can occur when the `run_pending_tasks` method is called by user code while the cache is receiving a very high number of concurrent cache write operations (e.g. `insert`, `get_with`, `invalidate`, etc.).
  - When it occurs, the `schedule_write_op` method will spin in a busy loop forever, causing high CPU usage and starving all other async tasks.
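For reference, `run_pending_tasks` is the method user code calls to flush the cache's pending housekeeping work now that there are no background threads. A minimal call-site sketch against the `future` cache, assuming moka v0.12.x with the `future` feature and a Tokio runtime (the runtime choice is an assumption):

```rust
use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<u32, String> = Cache::new(10_000);

    cache.insert(1, "one".to_string()).await;
    cache.invalidate(&1).await;

    // Explicitly process pending housekeeping tasks (evictions,
    // expirations, eviction-listener notifications).
    cache.run_pending_tasks().await;
}
```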
- Upgraded the `async-lock` crate used by `future::Cache` from v2.4 to the latest v3.3.
- Added support for a plain LRU (Least Recently Used) eviction policy (#390). (See the sketch after this list.)
  - The LRU policy is enabled by calling the `eviction_policy` method of the cache builder with a policy obtained by the `EvictionPolicy::lru` function.
  - The default eviction policy remains TinyLFU (Tiny, Least Frequently Used), as it maintains a better hit rate than LRU for most use cases. TinyLFU combines an LRU eviction policy with a popularity-based admission policy. A probabilistic data structure is used to estimate the historical popularity of both hit and missed keys (not only the keys currently in the cache).
  - However, some use cases may prefer the LRU policy over TinyLFU. An example is a recency-biased workload such as streaming data processing; the LRU policy can be used there to achieve a better hit rate.
  - Note that we are planning to add an adaptive eviction/admission policy called Window-TinyLFU in the future. It will adjust the balance between recency and frequency based on the current workload.
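A minimal sketch of enabling the LRU policy, assuming a recent moka v0.12.x with the `sync` feature and `EvictionPolicy` exposed via `moka::policy`; the key and value types are arbitrary:

```rust
use moka::{policy::EvictionPolicy, sync::Cache};

fn main() {
    // Build a cache that evicts with a plain LRU policy instead of the
    // default TinyLFU policy.
    let cache: Cache<u32, String> = Cache::builder()
        .max_capacity(10_000)
        .eviction_policy(EvictionPolicy::lru())
        .build();

    cache.insert(1, "one".to_string());
    assert_eq!(cache.get(&1), Some("one".to_string()));
}
```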
- Ensure `crossbeam-epoch` runs GC when dropping a cache (#384):
  - The `crossbeam-epoch` crate provides an epoch-based memory reclamation scheme for concurrent data structures. It is used by Moka cache to safely drop cached entries while they are still being accessed by other threads.
  - `crossbeam-epoch` does its best to reclaim memory (drop the entries evicted from the cache) when the epoch is advanced. However, it does not guarantee that memory will be reclaimed immediately after the epoch is advanced. This means that entries can remain in memory for a while after the cache is dropped.
  - This fix ensures that, when a cache is dropped, the epoch is advanced and `crossbeam-epoch`'s thread-local buffers are flushed, helping to reclaim memory immediately.
  - Note that there are still chances that some entries remain in memory for a while after a cache is dropped. We are looking for alternatives to `crossbeam-epoch` to improve this situation (e.g. #385).
- Added an example for reinserting expired entries to the cache. (#382)
- Added the upsert and compute methods for modifying a cached entry (#370). (See the sketch after this list.)
  - Now the `entry` and `entry_by_ref` APIs have the following methods:
    - `and_upsert_with` method to insert or update the entry.
    - `and_compute_with` method to insert, update, remove or do nothing on the entry.
    - `and_try_compute_with` method, which is similar to the above but returns a `Result`.
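A minimal sketch of `and_upsert_with` on the `sync` cache, assuming a recent moka v0.12.x; `and_compute_with` and `and_try_compute_with` follow the same shape but let the closure choose between inserting, updating, removing, or doing nothing:

```rust
use moka::sync::Cache;

fn main() {
    let cache: Cache<String, u64> = Cache::new(100);

    // Insert the entry if it is missing, or update it if it is present.
    // The closure receives the existing entry (if any) and returns the
    // new value to store.
    let entry = cache
        .entry("counter".to_string())
        .and_upsert_with(|maybe_entry| match maybe_entry {
            Some(entry) => entry.into_value() + 1,
            None => 1,
        });

    assert_eq!(*entry.value(), 1);
}
```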
- Raised the version requirement of the `quanta` crate from `>=0.11.0, <0.12.0` to `>=0.12.2, <0.13.0` to avoid under-measuring the elapsed time on Apple silicon Macs (#376).
  - Due to this under-measurement, cached entries on macOS arm64 could expire slightly later than expected.
- Prevent timing issues in writes that cause inconsistencies between the cache's internal data structures (#348):
  - One way to trigger the issue is to insert the same key twice quickly, once when the cache is full and a second time when there is room in the cache.
  - When it occurs, the cache will not return the value inserted in the second call (which is wrong), and the `entry_count` method will keep returning a non-zero value after calling the `invalidate_all` method (which is also wrong).
- Now the last access time of a cached entry is updated immediately after the entry is read (#363):
  - When the time-to-idle of a cache is set, the last access time of a cached entry is used to determine if the entry has expired.
  - Before this fix, the access time was updated (to the time when the entry was read) only when pending tasks were processed. This delay caused some entries to become temporarily unavailable for reads even though they had been accessed recently; they became available again once the pending tasks were processed.
  - Now the last access time is updated immediately after the entry is read. The entry will remain valid until the time-to-idle has elapsed.
Note that both #348 and #363 were already present in v0.11.x and older versions. However, they were less likely to occur in those versions because background threads periodically processed pending tasks, leaving much shorter time windows for these issues to occur.
- Updated the Rust edition from 2018 to 2021. (#339, by @nyurik)
  - The MSRV remains at Rust 1.65.
- Changed to use inline format arguments throughout the code, including examples. (#340, by @nyurik)
- Added an example for cascading drop triggered by eviction (#350, by @peter-scholtens)
- Fixed a memory leak in `future::Cache` that occurred when `get_with()`, `entry().or_insert_with()`, and similar methods were used (#329).
  - This bug was introduced in v0.12.0. Versions prior to v0.12.0 do not have this bug.
Note: v0.12.0 has major breaking changes to the API and internal behavior.

- `sync` caches are no longer enabled by default: Please use a crate feature `sync` to enable them.
- No more background threads: All cache types (`future::Cache`, `sync::Cache`, and `sync::SegmentedCache`) no longer spawn background threads.
  - The `scheduled-thread-pool` crate was removed from the dependencies.
  - Because of this change, many private methods and some public methods under the `future` module were converted to `async` methods. You may need to add `.await` to your code for those methods.
- Immediate notification delivery: The `notification::DeliveryMode` enum for the eviction listener was removed. Now all cache types behave as if the `Immediate` delivery mode is specified.

Please read the MIGRATION-GUIDE.md for more details.
- Removed the thread pool from the `future` cache (#294) and the `sync` caches (#316).
- Improved async cancellation safety of `future::Cache`. (#309)
- Fixed a bug where the internal `do_insert_with_hash` method gets the current `Instant` too early when an eviction listener is enabled. (#322)
- Fixed a bug in `sync::Cache` and `sync::SegmentedCache` where memory usage kept increasing when the eviction listener was set with the `Immediate` delivery mode. (#295)
Bumped the minimum supported Rust version (MSRV) to 1.65 (Nov 3, 2022). (#275)
- Removed the `num_cpus` crate from the dependencies. (#277)
- Refactored internal methods of the concurrent hash table to reduce compile times. (#265, by @Swatinem)
- Fixed an occasional panic in the internal `FrequencySketch` in debug builds. (#272)
- Added some example programs to the `examples` directory. (#268, by @peter-scholtens)
- Added support for per-entry expiration (#248). (See the sketch after this list.)
  - In addition to the existing TTL and TTI (time-to-idle) expiration times that apply to all entries in the cache, the `sync` and `future` caches can now allow different expiration times for individual entries.
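A minimal sketch of a per-entry expiration policy on the `sync` cache, assuming a current moka release where the `Expiry` trait is exported at the crate root; the `PerEntryExpiry` type and the value layout are made up for illustration:

```rust
use std::time::{Duration, Instant};

use moka::{sync::Cache, Expiry};

// Derive each entry's lifetime from a `Duration` stored next to the value.
struct PerEntryExpiry;

impl Expiry<String, (String, Duration)> for PerEntryExpiry {
    fn expire_after_create(
        &self,
        _key: &String,
        value: &(String, Duration),
        _created_at: Instant,
    ) -> Option<Duration> {
        Some(value.1)
    }
}

fn main() {
    let cache: Cache<String, (String, Duration)> = Cache::builder()
        .expire_after(PerEntryExpiry)
        .build();

    cache.insert("a".into(), ("short-lived".into(), Duration::from_secs(5)));
    cache.insert("b".into(), ("long-lived".into(), Duration::from_secs(600)));
}
```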
- Added the `remove` method to the `sync` and `future` caches (#255). (See the sketch after this list.)
  - Like the `invalidate` method, this method discards any cached value for the key, but it also returns a clone of the value.
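A minimal sketch on the `sync` cache (written against a current moka release):

```rust
use moka::sync::Cache;

fn main() {
    let cache: Cache<String, String> = Cache::new(100);
    cache.insert("key".to_string(), "value".to_string());

    // Like `invalidate`, `remove` discards the entry, but it also returns
    // a clone of the removed value (or `None` if the key was absent).
    let removed = cache.remove("key");
    assert_eq!(removed, Some("value".to_string()));
    assert!(cache.get("key").is_none());
}
```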
- Fixed the caches mutating a deque node through a `NonNull` pointer derived from a shared reference. (#259)
- Removed the `unsync` cache that was marked as deprecated in v0.10.0.
Bumped the minimum supported Rust version (MSRV) to 1.60 (Apr 7, 2022). (#252)
- Upgraded the `quanta` crate to v0.11.0. (#251)
  - This resolved "RUSTSEC-2020-0168: `mach` is unmaintained" (#243) by replacing `mach` with `mach2`.
  - `quanta` v0.11.0's MSRV is 1.60, so we also bumped the MSRV of Moka to 1.60.
- Fixed a bug where the `future` cache's `blocking().invalidate(key)` method does not trigger the eviction listener. (#242)
- Now the `sync` and `future` caches will not cache anything when the max capacity is set to zero (#230):
  - Previously, they would cache some entries for a short time (< 0.5 secs) even though the max capacity was zero.
- The following caches have been moved to a separate crate called Mini-Moka:
  - `moka::unsync::Cache` → `mini_moka::unsync::Cache`
  - `moka::dash::Cache` → `mini_moka::sync::Cache`
- The following methods have been removed from the `sync` and `future` caches (#199). They were deprecated in v0.8.0:
  - `get_or_insert_with` (Use `get_with` instead)
  - `get_or_try_insert_with` (Use `try_get_with` instead)
- The following methods of the `sync` and `future` caches have been marked as deprecated (#193):
  - `get_with_if` (Use the `entry` API's `or_insert_with_if` instead)
- Add `entry` and `entry_by_ref` APIs to the `sync` and `future` caches (#193). (See the sketch after this list.)
  - They allow users to perform more complex operations on a cache entry. At this point, the following operations (methods) are provided:
    - `or_default`
    - `or_insert`
    - `or_insert_with`
    - `or_insert_with_if`
    - `or_optionally_insert_with`
    - `or_try_insert_with`
  - The above methods return an `Entry` type, which provides an `is_fresh` method to check if the value was freshly computed or already existed in the cache.
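A minimal sketch of the `entry` and `entry_by_ref` APIs on the `sync` cache (written against a current moka release):

```rust
use moka::sync::Cache;

fn main() {
    let cache: Cache<String, u32> = Cache::new(100);

    // Insert the value only if the key is not already present.
    let entry = cache.entry("answer".to_string()).or_insert_with(|| 42);
    assert!(entry.is_fresh()); // freshly computed
    assert_eq!(*entry.value(), 42);

    // `entry_by_ref` borrows the key and clones it only when a new
    // entry actually has to be created.
    let entry = cache.entry_by_ref("answer").or_insert(0);
    assert!(!entry.is_fresh()); // already existed; `0` was not inserted
    assert_eq!(*entry.value(), 42);
}
```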
- Fix an issue where the `get_with` method of the `future` cache inflates the future size by ~7x, sometimes causing stack overflow (#212):
  - This was caused by a known `rustc` optimization issue on async functions (rust-lang/rust#62958).
  - Added a workaround to our cache, and now it will only inflate the size by ~2.5x.
- Fix a bug where setting the number of segments of the `sync` cache disables notifications. (#207)
- Add examples for the `build_with_hasher` method of cache builders. (#216)
- Prevent a race condition in the `get_with` family of methods to avoid evaluating the `init` closure or future multiple times in concurrent calls. (#195)
- Add the `optionally_get_with` method to the `sync` and `future` caches (#187, by @LMJW), as sketched below:
  - It is similar to `try_get_with` but takes an init closure/future returning an `Option<V>` instead of a `Result<V, E>`.
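A minimal sketch of `optionally_get_with` on the `sync` cache:

```rust
use moka::sync::Cache;

fn parse_u32(s: &str) -> Option<u32> {
    s.parse().ok()
}

fn main() {
    let cache: Cache<String, u32> = Cache::new(100);

    // The init closure returns `Option<V>`; `None` means nothing is cached.
    let value = cache.optionally_get_with("ten".to_string(), || parse_u32("10"));
    assert_eq!(value, Some(10));

    let missing = cache.optionally_get_with("oops".to_string(), || parse_u32("oops"));
    assert_eq!(missing, None);
}
```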
- Add `by_ref` versions of the `get_with`, `optionally_get_with`, and `try_get_with` APIs to the `sync` and `future` caches (#190, by @LMJW), as sketched below:
  - They are similar to the non-`by_ref` versions but take a reference to the key instead of an owned key. If the key does not exist in the cache, the key will be cloned to create a new entry in the cache.
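A minimal sketch of `get_with_by_ref` on the `sync` cache (the other `_by_ref` variants follow the same pattern):

```rust
use moka::sync::Cache;

fn main() {
    let cache: Cache<String, usize> = Cache::new(100);

    // The key is passed by reference and cloned only when a new entry
    // actually has to be created.
    let len = cache.get_with_by_ref("hello", || "hello".len());
    assert_eq!(len, 5);

    // A second call with the same key hits the cache, so the init
    // closure's result (0) is never used.
    let len = cache.get_with_by_ref("hello", || 0);
    assert_eq!(len, 5);
}
```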
- Fix a memory leak that happened after dropping a `sync` or `future` cache (#177):
  - This leaked the value part of cache entries.
- Add an experimental `js` feature to make the `unsync` and `sync` caches compile for the `wasm32-unknown-unknown` target (#173, by @aspect):
  - Note that we have not tested whether these caches work correctly in a wasm32 environment.
- Add an option to the cache builders of the following caches not to start and use the global thread pools for housekeeping tasks (#165):
  - `sync::Cache`
  - `sync::SegmentedCache`
- Ensure that the following caches will drop the value of evicted entries immediately after eviction (#169):
  - `sync::Cache`
  - `sync::SegmentedCache`
  - `future::Cache`
- Fix segmentation faults in the `sync` and `future` caches under heavy loads on many-core machines (#34):
  - NOTE: Although this issue was found in our testing environment ten months ago (v0.5.1), no user has reported experiencing it.
  - NOTE: In v0.8.4, we added a mitigation to reduce the chance of the segfaults occurring.
- Upgrade crossbeam-epoch from v0.8.2 to v0.9.9 (#157):
  - This will make GitHub Dependabot stop alerting about the security advisory CVE-2022-23639 for crossbeam-utils versions < 0.8.7.
  - Moka v0.9.1 or older was not vulnerable to the CVE:
    - Although the older crossbeam-epoch v0.8.2 depends on an affected version of crossbeam-utils, epoch v0.8.2 does not use the affected functions of utils. (#162)
- Relax the too-restrictive requirement `Arc<K>: Borrow<Q>` for the key `&Q` of the `contains_key`, `get` and `invalidate` methods in the following caches (with `K` as the key type) (#167). The requirement is now `K: Borrow<Q>`, so these methods will accept `&[u8]` for the key `&Q` when the stored key `K` is `Vec<u8>`:
  - `sync::Cache`
  - `sync::SegmentedCache`
  - `future::Cache`
- Add support for an eviction listener to the following caches (#145). An eviction listener is a callback function that will be called when an entry is removed from the cache (see the sketch after this list):
  - `sync::Cache`
  - `sync::SegmentedCache`
  - `future::Cache`
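A minimal sketch of registering an eviction listener on a `sync::Cache`, written against a current moka v0.12.x builder API (earlier versions also offered a configurable delivery mode):

```rust
use std::sync::Arc;

use moka::{notification::RemovalCause, sync::Cache};

fn main() {
    // The listener receives the key, the value, and the removal cause
    // (e.g. Expired, Explicit, Replaced, or Size).
    let listener = |key: Arc<String>, value: u32, cause: RemovalCause| {
        eprintln!("evicted {key} = {value} ({cause:?})");
    };

    let cache: Cache<String, u32> = Cache::builder()
        .max_capacity(100)
        .eviction_listener(listener)
        .build();

    cache.insert("a".to_string(), 1);
    cache.invalidate("a");
    // Process pending housekeeping so the notification is delivered.
    cache.run_pending_tasks();
}
```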
- Add a crate feature `sync` for enabling and disabling the `sync` caches. (#141 by @Milo123459, and #143)
  - This feature is enabled by default.
  - When using the experimental `dash` cache, opting out of `sync` will reduce the number of dependencies.
- Add a crate feature `logging` to enable an optional `log` crate dependency. (#159)
  - Currently, a log will be emitted only when an eviction listener has panicked.
- Fix a bug that caused the `invalidate_all` and `invalidate_entries_if` methods of the following caches to not invalidate entries inserted just before calling them (#155):
  - `sync::Cache`
  - `sync::SegmentedCache`
  - `future::Cache`
  - Experimental `dash::Cache`
- Add basic stats (`entry_count` and `weighted_size`) methods to all caches. (#137)
- Add a `Debug` impl to the following caches (#138):
  - `sync::Cache`
  - `sync::SegmentedCache`
  - `future::Cache`
  - `unsync::Cache`
- Remove the unnecessary `K: Clone` bound from the following caches when they are `Clone` (#133):
  - `sync::Cache`
  - `future::Cache`
  - Experimental `dash::Cache`
- Fix the following issue by upgrading the Quanta crate to v0.10.0 (#126):
  - Quanta v0.9.3 or older may not work correctly on some x86_64 machines where the Time Stamp Counter (TSC) is not synched across the processor cores. (#119)
  - For more details about the issue, see the relevant section of the README.
- Add the `get_with_if` method to the following caches (#123):
  - `sync::Cache`
  - `sync::SegmentedCache`
  - `future::Cache`
The following are internal changes to improve memory safety in unsafe Rust usage in Moka:
- Remove a pointer-to-integer transmute by converting `UnsafeWeakPointer` from `usize` to `*mut T`. (#127, by @saethlin)
- Increase the number of segments of the waiters hash table from 16 to 64 (#129) to reduce the chance of the following issue occurring:
  - Segfaults under heavy workloads on a many-core machine. (#34)
- Make the Quanta crate optional (but enabled by default). (#121)
  - Quanta v0.9.3 or older may not work correctly on some x86_64 machines where the Time Stamp Counter (TSC) is not synched across the processor cores. (#119)
  - This issue was fixed by Quanta v0.10.0. You can prevent the issue by upgrading Moka to v0.8.4 or newer.
  - For more details about the issue, see the relevant section of the README.
- Add iterators to the following caches (#114); see the sketch after this list:
  - `sync::Cache`
  - `sync::SegmentedCache`
  - `future::Cache`
  - `unsync::Cache`
- Implement `IntoIterator` for all of the caches (including the experimental `dash::Cache`). (#114)
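A minimal sketch of iterating a `sync::Cache`; the iterator yields `(Arc<K>, V)` pairs:

```rust
use moka::sync::Cache;

fn main() {
    let cache: Cache<String, u32> = Cache::new(100);
    cache.insert("a".to_string(), 1);
    cache.insert("b".to_string(), 2);

    // Entries are visited in no particular order.
    for (key, value) in cache.iter() {
        println!("{key} => {value}");
    }
}
```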
- Fix the `dash::Cache` iterator not to return expired entries. (#116)
- Prevent an "index out of bounds" error when `sync::SegmentedCache` was created with a non-power-of-two number of segments. (#117)
- Add the `contains_key` method to check if a key is present without resetting the idle timer or updating the historic popularity estimator (#107), as sketched below.
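A minimal sketch of `contains_key` on the `sync` cache:

```rust
use moka::sync::Cache;

fn main() {
    let cache: Cache<String, u32> = Cache::new(100);
    cache.insert("a".to_string(), 1);

    // Unlike `get`, this does not count as a read, so the idle timer and
    // the popularity estimator are left untouched.
    assert!(cache.contains_key("a"));
    assert!(!cache.contains_key("b"));
}
```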
As a part of stabilizing the cache API, the following cache methods have been renamed:

- `get_or_insert_with(K, F)` → `get_with(K, F)`
- `get_or_try_insert_with(K, F)` → `try_get_with(K, F)`

Old methods are still available but marked as deprecated. They will be removed in a future version.

Also, a `policy` method was added to all caches, and a `blocking` method was added to `future::Cache`. They return a `Policy` struct or a `BlockingOp` struct, respectively. Some uncommon cache methods were moved to these structs, and the old methods were removed without deprecation.

Please see #105 for the complete list of the affected methods.
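A minimal before/after sketch of the rename, written against a current moka release where only the new names remain:

```rust
use moka::sync::Cache;

fn main() {
    let cache: Cache<String, String> = Cache::new(100);

    // Before the rename:
    //     cache.get_or_insert_with("key".to_string(), || "value".to_string());
    // After the rename:
    let value = cache.get_with("key".to_string(), || "value".to_string());
    assert_eq!(value, "value");
}
```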
- API stabilization. (Smaller core cache API, shorter names for common methods) (#105)
- Performance related:
- Update the minimum versions of dependencies:
- crossbeam-channel to v0.5.4. (#100)
- scheduled-thread-pool to v0.2.5. (#103, by @Milo123459)
- (dev-dependency) skeptic to v0.13.5. (#104)
- Add a synchronous cache `moka::dash::Cache`, which uses `dashmap::DashMap` as the internal storage. (#99)
- Add an iterator to `moka::dash::Cache`. (#101)
Please note that the above additions are highly experimental and their APIs will be changed frequently in the next few releases.
The minimum supported Rust version (MSRV) is now 1.51.0 (Mar 25, 2021).
- Addressed a memory utilization issue that will get worse when keys have high cardinality (#72):
  - Reduce memory overhead in the internal concurrent hash table (cht). (#79)
  - Fix a bug that can create an oversized frequency sketch when a weigher is set. (#75)
  - Change `EntryInfo` from an `enum` to a `struct` to reduce memory utilization. (#76)
  - Replace some `std::sync::Arc` usages with `triomphe::Arc` to reduce memory utilization. (#80)
  - Embed the `CacheRegion` value into the 2-bit tag space of the `TagNonNull` pointer. (#84)
- Fix a bug that used a wrong (oversized) initial capacity for the internal cht. (#83)
- Add an `unstable-debug-counters` feature for testing purposes. (#82)
- Import (include) cht source files for better integration. (#77, #86)
- Improve the CI coverage for Clippy lints and fix some Clippy warnings in unit tests. (#73, by @06chaynes)
- Important Fix: A memory leak issue (#65 below) was found in all previous versions (since v0.1.0) and fixed in this version. All users are encouraged to upgrade to this or newer version.
- Fix a memory leak that will happen when evicting/expiring an entry or manually invalidating an entry. (#65)
- Update the minimum depending version of crossbeam-channel from v0.5.0 to v0.5.2. (#67)
- Breaking change: The type of `max_capacity` has been changed from `usize` to `u64`. This was necessary to make the weight-based cache management consistent across different CPU architectures.
- Add support for weight-based (size aware) cache management. (#24)
- Add support for unbound cache. (#24)
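A minimal sketch of the weight-based (size aware) management above, using the cache builder's `weigher` method; it is written against a current moka `sync` API, which may differ slightly from the API at the time of this entry:

```rust
use moka::sync::Cache;

fn main() {
    // Bound the cache by the total weighted size (here, the total bytes
    // of the values) instead of the number of entries.
    let cache: Cache<String, String> = Cache::builder()
        .weigher(|_key, value: &String| value.len() as u32)
        .max_capacity(32 * 1024 * 1024) // 32 MiB of values
        .build();

    cache.insert("greeting".to_string(), "hello, world".to_string());
}
```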
- Fix a bug in the `get_or_insert_with` and `get_or_try_insert_with` methods of `future::Cache`, which caused a panic if the previously inserting task was aborted. (#59)
- Remove the `Send` and `'static` bounds from the `get_or_insert_with` and `get_or_try_insert_with` methods of `future::Cache`. (#53, by @tinou98)
- Protect against overflow when computing expiration. (#56, by @barkanido)
- Fix a bug in the `get_or_insert_with` and `get_or_try_insert_with` methods of `future::Cache` and `sync::Cache`; a panic in the `init` future/closure caused subsequent calls on the same key to get "unreachable code" panics. (#43)
- Change `get_or_try_insert_with` to return a concrete error type rather than a trait object. (#23, #37)
- Restore the quanta dependency on some 32-bit platforms such as `armv5te-unknown-linux-musleabi` or `mips-unknown-linux-musl`. (#42, by @messense)
- Add support for some 32-bit platforms where `std::sync::atomic::AtomicU64` is not provided (e.g. `armv5te-unknown-linux-musleabi` or `mips-unknown-linux-musl`). (#38)
  - On these platforms, you will need to disable the default features of Moka. See the relevant section of the README.
- Fix a bug in the `get_or_insert_with` and `get_or_try_insert_with` methods of `future::Cache` by adding the missing bounds `Send` and `'static` to the `init` future. Without this fix, these methods could accept a non-`Send` or non-`'static` future and may cause undefined behavior. (#31)
- Fix a `usize` overflow on a big cache capacity. (#28)
- Add examples for the `get_or_insert_with` and `get_or_try_insert_with` methods to the docs. (#30)
- Downgrade crossbeam-epoch used in moka-cht from v0.9.x to v0.8.x as a possible workaround for segmentation faults on many-core CPU machines. (#33)
- Replace a dependency cht v0.4 with moka-cht v0.5. (#22)
- Add the `get_or_insert_with` and `get_or_try_insert_with` methods to the `sync` and `future` caches. (#20)
- Breaking change: Now `sync::{Cache, SegmentedCache}` and `future::Cache` require `Send`, `Sync` and `'static` for the generic parameters `K` (key), `V` (value) and `S` (hasher state). This is necessary to prevent potential undefined behavior in applications using a single-threaded async runtime such as Actix-rt. (#19)
- Add the `invalidate_entries_if` method to the `sync`, `future` and `unsync` caches. (#12)
- Stop skeptic from having to be compiled by all downstream users. (#16, by @paolobarbolini)
- Add an unsync cache (`moka::unsync::Cache`) and its builder for single-thread applications. (#9)
- Add the `invalidate_all` method to the `sync`, `future` and `unsync` caches. (#11)
- Fix problems including segfault caused by race conditions between the sync/eviction thread and client writes. (Addressed as a part of #11).
- Add an asynchronous, futures aware cache (`moka::future::Cache`) and its builder. (#7) (A minimal usage sketch follows this entry.)
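A minimal usage sketch of `future::Cache`, written against a current moka v0.12.x with the `future` feature; the Tokio runtime is an assumption:

```rust
use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<String, String> = Cache::new(100);

    // All operations on the async cache are `async` and must be awaited.
    cache.insert("key".to_string(), "value".to_string()).await;

    // `get_with` takes a future that resolves the value on a cache miss.
    let value = cache
        .get_with("key".to_string(), async { "fallback".to_string() })
        .await;
    assert_eq!(value, "value");
}
```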
- Add thread-safe, highly concurrent in-memory cache implementations (`moka::sync::{Cache, SegmentedCache}`) with the following features (see the sketch after this list):
  - Bounded by the maximum number of elements.
  - Maintains a good hit rate by using entry replacement algorithms inspired by Caffeine:
    - Admission to the cache is controlled by the Least Frequently Used (LFU) policy.
    - Eviction from the cache is controlled by the Least Recently Used (LRU) policy.
  - Expiration policies:
    - Time to live
    - Time to idle
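A minimal sketch of the `sync::Cache` builder with both expiration policies enabled (written against a current moka release with the `sync` feature):

```rust
use std::time::Duration;

use moka::sync::Cache;

fn main() {
    let cache: Cache<String, String> = Cache::builder()
        .max_capacity(10_000)
        .time_to_live(Duration::from_secs(300)) // expire 5 min after insert
        .time_to_idle(Duration::from_secs(60)) // expire 1 min after last access
        .build();

    cache.insert("key".to_string(), "value".to_string());
    assert_eq!(cache.get("key"), Some("value".to_string()));
}
```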