Significantly improved Javadocs
jdereg committed Jun 23, 2024
1 parent 58948cc commit 9adab4a
Showing 3 changed files with 65 additions and 9 deletions.
45 changes: 45 additions & 0 deletions src/main/java/com/cedarsoftware/util/LRUCache.java
@@ -9,6 +9,51 @@
import com.cedarsoftware.util.cache.LockingLRUCacheStrategy;
import com.cedarsoftware.util.cache.ThreadedLRUCacheStrategy;

/**
* This class provides a thread-safe Least Recently Used (LRU) cache API that will evict the least recently used items
* once a threshold is met. It implements the <code>Map</code> interface for convenience.
* <p>
* This class provides two implementation strategies: a locking approach and a threaded approach.
* <ul>
* <li>The Locking strategy can be selected by using the constructor that takes only an int for capacity, or by using
* the constructor that takes an int and a StrategyType enum (StrategyType.LOCKING).</li>
* <li>The Threaded strategy can be selected by using the constructor that takes an int and a StrategyType enum
* (StrategyType.THREADED). Additionally, there is a constructor that takes a capacity, a cleanup delay time, a
* ScheduledExecutorService, and a ForkJoinPool, which also selects the threaded strategy.</li>
* </ul>
* <p>
* The Locking strategy allows O(1) access for get(), put(), and remove(). put(), remove(), and many other
* methods obtain a write-lock. get() acquires the lock only when it is immediately available; otherwise it
* returns the value without updating the LRU order. This 'try-lock' approach ensures that the get() API never
* blocks, at the cost of the LRU order not being perfectly maintained under heavy load.
* <p>
* The Threaded strategy allows for O(1) access for get(), put(), and remove() without blocking. It uses a <code>ConcurrentHashMap</code>
* internally. To ensure that the capacity is honored, whenever put() is called, a thread (from a thread pool) is tasked
* with cleaning up items above the capacity threshold. This means that the cache may temporarily exceed its capacity, but
* it will soon be trimmed back to the capacity limit by the scheduled thread.
* <p>
* LRUCache supports <code>null</code> for both keys and values.
* <p>
* @see LockingLRUCacheStrategy
* @see ThreadedLRUCacheStrategy
* @see LRUCache.StrategyType
* <p>
* @author John DeRegnaucourt ([email protected])
* <br>
* Copyright (c) Cedar Software LLC
* <br><br>
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
* <br><br>
* <a href="http://www.apache.org/licenses/LICENSE-2.0">License</a>
* <br><br>
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
public class LRUCache<K, V> implements Map<K, V> {
private final Map<K, V> strategy;

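As a usage sketch of the strategy selection described in the Javadoc above: the constructor shapes follow the documented description (capacity only; capacity plus StrategyType; capacity, cleanup delay, scheduler, and ForkJoinPool), while the capacity, delay value, and pool choices here are illustrative.

import java.util.concurrent.Executors;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ScheduledExecutorService;

import com.cedarsoftware.util.LRUCache;

public class LRUCacheUsage {
    public static void main(String[] args) {
        // Capacity-only constructor selects the Locking strategy
        LRUCache<String, String> locking = new LRUCache<>(10_000);

        // Explicit strategy selection via the StrategyType enum
        LRUCache<String, String> threaded =
                new LRUCache<>(10_000, LRUCache.StrategyType.THREADED);

        // Threaded strategy with a custom cleanup delay (ms), scheduler, and pool
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        LRUCache<String, String> tuned =
                new LRUCache<>(10_000, 25, scheduler, ForkJoinPool.commonPool());

        locking.put("a", "alpha");
        threaded.put(null, null);   // null keys and values are supported
        tuned.put("b", "beta");
    }
}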
src/main/java/com/cedarsoftware/util/cache/LockingLRUCacheStrategy.java
@@ -10,9 +10,16 @@

/**
* This class provides a thread-safe Least Recently Used (LRU) cache API that will evict the least recently used items
* once a threshold is met. It implements the Map interface for convenience.
* once a threshold is met. It implements the <code>Map</code> interface for convenience.
* <p>
* LRUCache supports null for key or value.
* The Locking strategy allows O(1) access for get(), put(), and remove(). put(), remove(), and many other
* methods obtain a write-lock. get() acquires the lock only when it is immediately available; otherwise it
* returns the value without updating the LRU order. This 'try-lock' approach ensures that the get() API never
* blocks, at the cost of the LRU order not being perfectly maintained under heavy load.
* <p>
* LRUCache supports <code>null</code> for both keys and values.
* <p>
* <b>Special Thanks:</b> This implementation was inspired by insights and suggestions from Ben Manes.
* <p>
* @author John DeRegnaucourt ([email protected])
* <br>
@@ -89,6 +96,8 @@ public V get(Object key) {
if (node == null) {
return null;
}

// Ben Manes suggestion - use exclusive 'try-lock'
if (lock.tryLock()) {
try {
moveToHead(node);
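To make the 'try-lock' read path concrete, here is a minimal, self-contained sketch of the pattern. The class and field names are hypothetical; the real LockingLRUCacheStrategy differs in detail (for example, it moves a node to the head of an internal list, and it supports null keys and values, which this sketch does not).

import java.util.LinkedHashSet;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the 'try-lock' read path; not the library's actual class.
class TryLockLruSketch<K, V> {
    private final ConcurrentMap<K, V> values = new ConcurrentHashMap<>();
    private final LinkedHashSet<K> order = new LinkedHashSet<>(); // guarded by 'lock'; eldest first
    private final ReentrantLock lock = new ReentrantLock();
    private final int capacity;

    TryLockLruSketch(int capacity) {
        this.capacity = capacity;
    }

    V get(K key) {
        V value = values.get(key);             // lock-free read: never blocks
        if (value != null && lock.tryLock()) { // update recency only if the lock is free
            try {
                order.remove(key);
                order.add(key);                // re-append as most recently used
            } finally {
                lock.unlock();
            }
        }
        return value;                          // under contention, LRU order may lag
    }

    void put(K key, V value) {
        lock.lock();                           // writers always acquire the lock
        try {
            values.put(key, value);
            order.remove(key);
            order.add(key);
            if (order.size() > capacity) {     // evict the least recently used entry
                K eldest = order.iterator().next();
                order.remove(eldest);
                values.remove(eldest);
            }
        } finally {
            lock.unlock();
        }
    }
}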
src/main/java/com/cedarsoftware/util/cache/ThreadedLRUCacheStrategy.java
@@ -16,16 +16,18 @@
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

import com.cedarsoftware.util.LRUCache;

/**
* This class provides a thread-safe Least Recently Used (LRU) cache API that will evict the least recently used items
* once a threshold is met. It implements the Map interface for convenience.
* once a threshold is met. It implements the <code>Map</code> interface for convenience.
* <p>
* LRUCache is thread-safe via usage of ConcurrentHashMap for internal storage. The .get(), .remove(), and .put() APIs
* operate in O(1) without blocking. When .put() is called, a background cleanup task is scheduled to ensure
* {@code cache.size <= capacity}. This maintains cache size to capacity, even during bursty loads. It is not immediate;
* the LRUCache can exceed the capacity during a rapid load; however, it will quickly reduce to max capacity.
* The Threaded strategy allows for O(1) access for get(), put(), and remove() without blocking. It uses a <code>ConcurrentHashMap</code>
* internally. To ensure that the capacity is honored, whenever put() is called, a thread (from a thread pool) is tasked
* with cleaning up items above the capacity threshold. This means that the cache may temporarily exceed its capacity, but
* it will soon be trimmed back to the capacity limit by the scheduled thread.
* <p>
* LRUCache supports null for key or value.
* LRUCache supports <code>null</code> for both keys and values.
* <p>
* @author John DeRegnaucourt ([email protected])
* <br>
@@ -73,7 +75,7 @@ void updateTimestamp() {
* Create an LRUCache with a maximum capacity of 'capacity.' Note that the LRUCache may temporarily exceed the
* capacity; however, it will quickly be reduced to that amount. The delay before cleanup is configurable via the
* cleanupDelay parameter and custom scheduler and executor services.
*
*
* @param capacity int maximum size for the LRU cache.
* @param cleanupDelayMillis int milliseconds before scheduling a cleanup (reduction to capacity if the cache currently
* exceeds it).
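A minimal sketch of the scheduled-cleanup idea described above. Names such as ThreadedLruSketch and cleanupPending are hypothetical, and the real class handles more (null keys and values, custom scheduler and ForkJoinPool), but the shape matches the description: non-blocking get()/put(), with a delayed task that trims the map back to capacity.

import java.util.Comparator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the threaded strategy; not the library's actual class.
class ThreadedLruSketch<K, V> {
    private static final class Entry<V> {
        final V value;
        volatile long lastUsed = System.nanoTime();
        Entry(V value) { this.value = value; }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final AtomicBoolean cleanupPending = new AtomicBoolean(false);
    private final int capacity;
    private final long cleanupDelayMillis;

    ThreadedLruSketch(int capacity, long cleanupDelayMillis) {
        this.capacity = capacity;
        this.cleanupDelayMillis = cleanupDelayMillis;
    }

    V get(K key) {
        Entry<V> e = map.get(key);           // O(1), never blocks
        if (e == null) {
            return null;
        }
        e.lastUsed = System.nanoTime();      // cheap recency stamp, no lock needed
        return e.value;
    }

    void put(K key, V value) {
        map.put(key, new Entry<>(value));    // may briefly push size past capacity
        if (map.size() > capacity && cleanupPending.compareAndSet(false, true)) {
            scheduler.schedule(this::cleanup, cleanupDelayMillis, TimeUnit.MILLISECONDS);
        }
    }

    private void cleanup() {
        cleanupPending.set(false);
        int excess = map.size() - capacity;
        if (excess <= 0) {
            return;
        }
        map.entrySet().stream()              // trim the oldest entries back to capacity
           .sorted(Comparator.comparingLong(
                   (Map.Entry<K, Entry<V>> e) -> e.getValue().lastUsed))
           .limit(excess)
           .forEach(e -> map.remove(e.getKey(), e.getValue()));
    }
}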
