Commit
minor javadoc updates
jdereg committed Jun 24, 2024
1 parent 0863337 commit c87e830
Showing 3 changed files with 10 additions and 10 deletions.
12 changes: 6 additions & 6 deletions src/main/java/com/cedarsoftware/util/LRUCache.java
@@ -9,16 +9,16 @@
 import com.cedarsoftware.util.cache.ThreadedLRUCacheStrategy;

 /**
- * This class provides a thread-safe Least Recently Used (LRU) cache API that will evict the least recently used items,
- * once a threshold is met. It implements the <code>Map</code> interface for convenience.
+ * This class provides a thread-safe Least Recently Used (LRU) cache API that evicts the least recently used items once
+ * a threshold is met. It implements the <code>Map</code> interface for convenience.
  * <p>
- * This class provides two implementation strategies: a locking approach and a threaded approach.
+ * This class offers two implementation strategies: a locking approach and a threaded approach.
  * <ul>
  * <li>The Locking strategy can be selected by using the constructor that takes only an int for capacity, or by using
  * the constructor that takes an int and a StrategyType enum (StrategyType.LOCKING).</li>
  * <li>The Threaded strategy can be selected by using the constructor that takes an int and a StrategyType enum
- * (StrategyType.THREADED). Additionally, there is a constructor that takes a capacity, a cleanup delay time, a
- * ScheduledExecutorService, and a ForkJoinPool, which also selects the threaded strategy.</li>
+ * (StrategyType.THREADED). Additionally, there is a constructor that takes a capacity, a cleanup delay time,
+ * and a ScheduledExecutorService.</li>
  * </ul>
  * <p>
  * The Locking strategy allows for O(1) access for get(), put(), and remove(). For put(), remove(), and many other
@@ -31,7 +31,7 @@
 * with cleaning up items above the capacity threshold. This means that the cache may temporarily exceed its capacity, but
 * it will soon be trimmed back to the capacity limit by the scheduled thread.
 * <p>
- * LRUCache supports <code>null</code> for both <b>key</b> or <b>value</b>.
+ * LRUCache supports <code>null</code> for both <b>key</b> and <b>value</b>.
 * <p>
 * <b>Special Thanks:</b> This implementation was inspired by insights and suggestions from Ben Manes.
 * @see LockingLRUCacheStrategy
@@ -9,15 +9,15 @@
 import java.util.concurrent.locks.ReentrantLock;

 /**
- * This class provides a thread-safe Least Recently Used (LRU) cache API that will evict the least recently used items,
+ * This class provides a thread-safe Least Recently Used (LRU) cache API that evicts the least recently used items
 * once a threshold is met. It implements the <code>Map</code> interface for convenience.
 * <p>
 * The Locking strategy allows for O(1) access for get(), put(), and remove(). For put(), remove(), and many other
 * methods, a write-lock is obtained. For get(), it attempts to lock but does not lock unless it can obtain it right away.
 * This 'try-lock' approach ensures that the get() API is never blocking, but it also means that the LRU order is not
 * perfectly maintained under heavy load.
 * <p>
- * LRUCache supports <code>null</code> for both key or value.
+ * LRUCache supports <code>null</code> for both key and value.
 * @author John DeRegnaucourt ([email protected])
 * <br>
 * Copyright (c) Cedar Software LLC
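The try-lock behavior this Javadoc describes can be illustrated with a short, self-contained sketch. This is not the library's LockingLRUCacheStrategy: the class name TryLockLruSketch and its internals are invented for illustration, and unlike the real cache it does not accept null keys or values (ConcurrentHashMap forbids them).

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the try-lock idea described above, not the library's
// LockingLRUCacheStrategy: values live in a ConcurrentHashMap so reads never
// block; LRU order is kept in a deque guarded by a lock that get() only
// *tries* to take, so order may be slightly stale under contention.
class TryLockLruSketch<K, V> {
    private final int capacity;
    private final ConcurrentHashMap<K, V> values = new ConcurrentHashMap<>();
    private final Deque<K> order = new ArrayDeque<>();   // head = least recently used
    private final ReentrantLock lock = new ReentrantLock();

    TryLockLruSketch(int capacity) { this.capacity = capacity; }

    V get(K key) {
        V value = values.get(key);            // never blocks
        if (value != null && lock.tryLock()) {
            try {                             // best-effort LRU bookkeeping
                order.remove(key);
                order.addLast(key);
            } finally {
                lock.unlock();
            }
        }
        return value;
    }

    V put(K key, V value) {
        lock.lock();                          // writers always wait for the lock
        try {
            V previous = values.put(key, value);
            order.remove(key);
            order.addLast(key);
            if (values.size() > capacity) {   // evict the least recently used entry
                K eldest = order.pollFirst();
                if (eldest != null) values.remove(eldest);
            }
            return previous;
        } finally {
            lock.unlock();
        }
    }

    int size() { return values.size(); }
}
```

When the lock is contended, get() simply skips the reorder step, which is why the Javadoc notes that LRU order is not perfectly maintained under heavy load.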
@@ -16,15 +16,15 @@
 import java.util.concurrent.atomic.AtomicBoolean;

 /**
- * This class provides a thread-safe Least Recently Used (LRU) cache API that will evict the least recently used items,
+ * This class provides a thread-safe Least Recently Used (LRU) cache API that evicts the least recently used items
 * once a threshold is met. It implements the <code>Map</code> interface for convenience.
 * <p>
 * The Threaded strategy allows for O(1) access for get(), put(), and remove() without blocking. It uses a <code>ConcurrentHashMap</code>
 * internally. To ensure that the capacity is honored, whenever put() is called, a thread (from a thread pool) is tasked
 * with cleaning up items above the capacity threshold. This means that the cache may temporarily exceed its capacity, but
 * it will soon be trimmed back to the capacity limit by the scheduled thread.
 * <p>
- * LRUCache supports <code>null</code> for both key or value.
+ * LRUCache supports <code>null</code> for both key and value.
 * <p>
 * @author John DeRegnaucourt ([email protected])
 * <br>
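The scheduled-cleanup behavior described in this Javadoc can likewise be sketched. Again, this is a hypothetical illustration, not the library's ThreadedLRUCacheStrategy: the name ThreadedLruSketch and the trim() helper are invented, recency is approximated with nanoTime() stamps rather than a true LRU list, and null keys/values are not supported here.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the threaded strategy described above, not the
// library's ThreadedLRUCacheStrategy: put() never blocks on eviction; it
// schedules a cleanup task that trims the map back down to capacity, so the
// cache may briefly exceed its limit.
class ThreadedLruSketch<K, V> {
    private final int capacity;
    private final long cleanupDelayMillis;
    private final ConcurrentHashMap<K, V> values = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<K, Long> accessTimes = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler;
    private final AtomicBoolean cleanupPending = new AtomicBoolean(false);

    ThreadedLruSketch(int capacity, long cleanupDelayMillis, ScheduledExecutorService scheduler) {
        this.capacity = capacity;
        this.cleanupDelayMillis = cleanupDelayMillis;
        this.scheduler = scheduler;
    }

    V get(K key) {
        V value = values.get(key);
        if (value != null) accessTimes.put(key, System.nanoTime());
        return value;
    }

    V put(K key, V value) {
        V previous = values.put(key, value);  // never blocks on eviction
        accessTimes.put(key, System.nanoTime());
        if (values.size() > capacity && cleanupPending.compareAndSet(false, true)) {
            scheduler.schedule(this::trim, cleanupDelayMillis, TimeUnit.MILLISECONDS);
        }
        return previous;
    }

    void trim() {                             // evict oldest entries until back at capacity
        cleanupPending.set(false);
        while (values.size() > capacity) {
            K oldest = null;
            long oldestTime = Long.MAX_VALUE;
            for (Map.Entry<K, Long> e : accessTimes.entrySet()) {
                if (e.getValue() < oldestTime) { oldestTime = e.getValue(); oldest = e.getKey(); }
            }
            if (oldest == null) break;
            accessTimes.remove(oldest);
            values.remove(oldest);
        }
    }

    int size() { return values.size(); }
}
```

Because put() only schedules a trim, size() can report more than the capacity until the cleanup task runs, which matches the "may temporarily exceed its capacity" behavior the Javadoc describes.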
