Making the simple complicated is commonplace; making the complicated simple, awesomely simple, that's creativity.
Let us explore the intuitions of dynamic programming and transform our thoughts from "what the hell?" to "oh yeah, duh!" via a 3-step heuristic process. In hindsight, we can "see" the ART of dynamic programming is as easy as 1, 2, 3.
You rarely have time for everything you want in this life, so you need to make choices. And hopefully your choices can come from a deep sense of who you are.
A significant amount of time and mental effort is necessary to begin understanding dynamic programming. Thus, DP is best approached as a "marathon" (not a "sprint"). Two key building blocks towards a basic understanding of DP are recursion and mathematical induction.
Note: When I mention "recursive stack unwinding" below, what I mean is "frames popping off the recursive call stack." And I imagine a similarity between this action and stepping down the stairs one-by-one, followed by stepping back up the stairs one-by-one in reverse. Stepping down the stairs would be equivalent to the recursive function being invoked as a subroutine of itself. And stepping back up the stairs would be equivalent to the recursive function invocations exiting their subroutines as the base case(s) are reached.
Take the first step in faith. You don't have to see the whole staircase, just take the first step.
A mental leap of faith is necessary to begin post-processing recursion and mathematical induction, ie. we can only understand the basic principles of recursion and recursive algorithms after we have assumed the inductive hypothesis of a recurrence relation is true. After taking this mental leap of faith, we can look back in hindsight with the mind's eye to discover what that actually means, ie. we can clearly "see" the recursive stack hit the base case(s) and begin to unwind, formulating an optimal solution built upon the optimal solutions to subproblems of the original problem itself.
We will never know our full potential unless we push ourselves to find it.
There are two key ingredients to problems which can be solved by a Dynamic Programming algorithm:
- Optimal substructure: an optimal solution can be constructed from optimal solutions to its subproblems
- Overlapping subproblems: the same subproblems recur many times throughout the computation
Don't only practice your art, but force your way into its secrets. For it and knowledge can raise men to the divine.
ART is an acronym used to intuitively create solutions to DP problems in 3 steps:
- All
- Remember
- Turn
These 3 steps are elaborated upon below; however, let us first take a minute to consider our end-goal, and how we can intuitively reach it. Our end-goal is to minimize or maximize an objective function within the given constraints of an arbitrary universe of discourse (ie. the problem statement).
Ask yourself this question: Is it possible to know the minimum or maximum objective function outcome without considering all possibilities? For example, let's say we have 3 numbers, and we want to know what is the minimum or maximum of those 3 numbers. Is it possible to know the minimum or maximum value without considering all of the numbers? Please take a moment to consider this question before proceeding.
See Answer
The answer is obviously "no." It is not possible to know which of the 3 numbers is minimal or maximal unless we consider all 3 values. Using 3 cards as an example, let's assume we only know the values of the first two cards. Since we have not seen the third card's value, we don't know if it's less than, equal to, or greater than the first two cards, and thus we cannot determine the objective function outcome without considering all of the card values.
All possibilities of a universe of discourse under consideration need to be checked before we can determine the objective function outcome. This realization allows us to begin creating a DP solution via a naive brute-force algorithm, ie. an exhaustive search of all possibilities. Therefore, we begin by exploring all possibilities via top-down depth-first-search. Since we know we need to check all possibilities, this gives us key insight towards the N dimensions of the corresponding DP memo, which is used to remember the optimal solutions to each overlapping subproblem. This intuition leads us to step 2, but first let us take another moment to consider the next key question.
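Before that, here is a minimal sketch of step 1 in code, assuming 746. Min Cost Climbing Stairs (from the problem list below) as the example: find the minimum cost to reach the top of a staircase, paying `cost[i]` to step from stair `i`, moving 1 or 2 stairs at a time, starting from stair 0 or 1. The brute-force DFS considers all possibilities at exponential cost:

```python
def min_cost_climbing_stairs(cost):
    """Step 1 sketch: top-down brute-force DFS over all possibilities."""
    N = len(cost)

    def go(i):
        if i >= N:          # base case: reached the top of the staircase
            return 0
        # pay cost[i], then consider all possibilities: take 1 or 2 steps
        return cost[i] + min(go(i + 1), go(i + 2))

    return min(go(0), go(1))    # we may start from stair 0 or stair 1

print(min_cost_climbing_stairs([10, 15, 20]))  # 15
```

Note the exponential blow-up: each call branches twice, so the same subproblems are recomputed over and over, which is exactly what step 2 addresses.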
Ask yourself this question: Is it possible to determine the objective function outcome without solving overlapping subproblems more than once?
See Answer
The answer is obviously "yes." With a properly structured N-dimensional memo we can store the optimal solutions to overlapping subproblems as they are computed, and then look up previous solutions upon demand. This is the entire purpose of the DP memo. Simply remember each previous subproblem's optimal solution to avoid re-calculating it. In fact, this is the raison d'être of dynamic programming: remembering the past to formulate the future, ie. use previous optimal subproblem solutions to formulate current optimal subproblem solutions to formulate the overall optimal solution for the original problem itself.
Remember each previous subproblem's optimal solution to avoid recomputing it. Combined with the previous "top-down brute-force" solution from step 1, we create the "top-down with memo" solution by simply using a memo to store and lookup solutions to previously solved subproblems. Thus a simple if-statement is added to the top of the recursive function to check if a solution to the subproblem is available. If the solution is available, then return it immediately. If the solution is not available, then compute and store the solution once, thus making the solution available for future lookups.
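As a minimal sketch of step 2 (again assuming 746. Min Cost Climbing Stairs: minimize the cost to reach the top, moving 1 or 2 stairs and paying `cost[i]` at each stair), the same DFS gains a memo and the if-statement described above:

```python
def min_cost_climbing_stairs(cost):
    """Step 2 sketch: top-down with memo (same DFS, linear time)."""
    N = len(cost)
    memo = {}

    def go(i):
        if i >= N:              # base case: reached the top
            return 0
        if i in memo:           # solution already available? return it immediately
            return memo[i]
        # compute and store the solution once for future lookups
        memo[i] = cost[i] + min(go(i + 1), go(i + 2))
        return memo[i]

    return min(go(0), go(1))

print(min_cost_climbing_stairs([10, 15, 20]))  # 15
```

Only the memo lookup and store are new; the recurrence relation itself is untouched from step 1.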
The memo is shaped as an arbitrary N-dimensional data structure such that each dimension corresponds to a specific variable of the universe of discourse. Thus, the size of the N-dimensional data structure directly corresponds to the product of the ranges of those variables, ie. all possibilities under consideration. And it follows that the keys which uniquely identify each subproblem solution's storage position within the DP memo are all valid combinations of those specific variables per the constraints of the problem statement. The base case(s) of the recurrence relation are added to the memo first. And as the recursive stack unwinds, the base case(s) are iteratively and optimally built upon. This iterative building upon each previous subproblem's optimal solution from the bottom-up leads us to step 3.
Turn the "top-down with memo" solution upside-down to formulate an explicit bottom-up solution. This step can be challenging because the base case(s) must first be explicitly specified before being iteratively built upon. The previous top-down solutions allow the base case(s) to be implied by the recursion towards the base case(s), and thus implicitly stored in the memo as each recursive stack "bottoms out" (ie. hits the base case(s) and begins to unwind). To prepare for this implicit-to-explicit transformation, it can be helpful to print the key and value each time a subproblem's optimal solution is stored in the memo by the "top-down with memo" solution. This explicitly identifies the base case(s) and clarifies how the recursive stack unwinds and thus dictates the iterative order of the bottom-up recurrence relation. It can also be helpful to print the entire memo when the memo's dimensions can be easily visualized, ie. within 1 or 2 dimensions.
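A minimal bottom-up sketch of step 3 (assuming the same 746. Min Cost Climbing Stairs example): the base cases `dp[N] = dp[N + 1] = 0` are now explicit, and the loop iterates in the reverse order of the recursive calls, mirroring the stack unwinding:

```python
def min_cost_climbing_stairs(cost):
    """Step 3 sketch: explicit bottom-up iteration over the same recurrence."""
    N = len(cost)
    dp = [0] * (N + 2)          # dp[N] = dp[N + 1] = 0 are the explicit base cases
    for i in range(N - 1, -1, -1):      # iterate in reverse of the recursion
        dp[i] = cost[i] + min(dp[i + 1], dp[i + 2])
    return min(dp[0], dp[1])

print(min_cost_climbing_stairs([10, 15, 20]))  # 15
```

Observe that the loop body is character-for-character the recurrence relation from the top-down solutions; only the base cases and iteration order changed.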
It may be possible to further reduce the bottom-up solution's memory consumption. For example, if we have a 2D matrix for the DP memo, but the current row is only dependent upon the previous row, then we can reduce memory from O(N^2) to O(N) by replacing `dp[i]` with `cur` (ie. the current row) and `dp[i - 1]` with `pre` (ie. the previous row). Furthermore, if `cur` is only dependent upon itself, then we can also remove `pre`. See 322. Coin Change and 518. Coin Change 2 as examples of this supplemental memory optimization, which further reduces the memory by a constant factor, ie. we only need N memory instead of 2 * N memory.
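A sketch of this optimization, assuming 518. Coin Change 2 (count the combinations of coins which sum to a target amount): the full 2D memo over (coins considered, amount) collapses into a single row, because each row depends only upon the previous row and earlier entries of the current row:

```python
def change(amount, coins):
    """518. Coin Change 2 sketch: 2D memo dp[i][a] reduced to a single 1D row."""
    dp = [0] * (amount + 1)
    dp[0] = 1                   # base case: one way to make amount 0 (use no coins)
    for coin in coins:
        for a in range(coin, amount + 1):
            # before the update, dp[a] still holds the previous row's value (pre);
            # dp[a - coin] already holds the current row's value (cur)
            dp[a] += dp[a - coin]
    return dp[amount]

print(change(5, [1, 2, 5]))  # 4
```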
Computer, install a recursive algorithm.
-Ensign Harry Kim, Star Trek Voyager, Episode 102
The ART of DP in 3 steps:
- All possibilities are considered via top-down brute-force depth-first-search
- Remember each subproblem's optimal solution via a DP memo
- Turn the top-down solution upside-down to create the bottom-up solution
Seeing is believing.
Below is an oversimplified example which demonstrates the top-down solution's path (highlighted in yellow and green) and the bottom-up solution's path (highlighted in green).
We can "see" the downward staircase of recursive calls highlighted in yellow and the corresponding optimal solutions formulated in reverse as the recursive stack unwinds back up the staircase highlighted in green.
You have to let it all go, Neo - fear, doubt, and disbelief... Free your mind.
Each i-th mental leap of faith from `i = 0..N-1` is highlighted in yellow as the recursive function `go()` invokes itself as a subroutine until the base case `N = 4` is reached, ie. we recursively `go()` from here to there over and over again. As the recursive stack unwinds (highlighted in green), the i-th subproblem solutions are optimally built upon themselves. And we arrive at the same answer at the `End`.
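The staircase above can be sketched with a toy recurrence: a hypothetical `go(i)` which simply counts the steps from `i` up to the base case `N = 4`, printing the downward calls and the unwinding as it goes:

```python
N = 4

def go(i):
    """Toy sketch of the staircase: count the steps from i to the base case N."""
    print(f"{'  ' * i}down: go({i})")                # stepping down the staircase
    result = 0 if i == N else 1 + go(i + 1)          # base case at i == N
    print(f"{'  ' * i}up:   go({i}) = {result}")     # stack unwinding back up
    return result

go(0)  # returns 4; prints the downward calls, then the unwinding in reverse
```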
True poets lead no one unawares. It is nothing other than awareness that poets - that is, creators of all sorts - seek. They do not display their art so as to make it appear real; they display the real in a way that reveals it to be art.
The bottom-up solution has two key advantages over the top-down solution:
- No recursive stack overhead
- Memory optimization
It's worth noting that we can perform linear scans in the top-down solutions from `i = 0..N-1` or in the reverse direction from `i = N-1..0`, and likewise we can do the same for the bottom-up solutions. However, since the bottom-up solutions require explicit base cases to be defined before being iteratively built upon, it often makes sense to follow the recursive stack unwinding starting at `N-1`, because we can explicitly add base case(s) by appending them onto indices `N`, `N+1`, `N+2`... to be iteratively built upon. See 712. Minimum ASCII Delete Sum for Two Strings as an example of bottom-up solutions performing linear scans in the opposite and same direction as the recursive stack unwinding, ie. starting from the top-left with empty-string base cases at index `0` (thus offsetting the `dp` matrix by 1) compared to starting from the bottom-right with empty-string base cases at indices `M`, `N` (thus not needing to offset the `dp` matrix, simplifying the code).
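To see both scan directions on a simpler 1D example (assuming 746. Min Cost Climbing Stairs rather than 712), the two sketches below arrive at the same answer. The backward scan follows the recursive stack unwinding, with base cases appended at indices `N` and `N+1`; the forward scan redefines `dp[i]` as the minimum cost to *reach* stair `i`, with base cases at indices `0` and `1`:

```python
def backward(cost):
    """Scan in the direction of the stack unwinding: base cases at N, N+1."""
    N = len(cost)
    dp = [0] * (N + 2)
    for i in range(N - 1, -1, -1):
        dp[i] = cost[i] + min(dp[i + 1], dp[i + 2])
    return min(dp[0], dp[1])

def forward(cost):
    """Scan in the opposite direction: dp[i] = min cost to reach stair i,
    with base cases dp[0] = dp[1] = 0 shifting the interpretation."""
    N = len(cost)
    dp = [0] * (N + 1)
    for i in range(2, N + 1):
        dp[i] = min(dp[i - 1] + cost[i - 1], dp[i - 2] + cost[i - 2])
    return dp[N]

print(backward([10, 15, 20]), forward([10, 15, 20]))  # 15 15
```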
It seemed unthinkable for me to leave the world forever before I had produced all that I felt called upon to produce.
- Base Case(s)
  - Where the recursive stack "bottoms out" and begins to unwind
- Recurrence Relation Target
  - Determine the overall objective function outcome by recursively solving subproblems optimally
- Memo
  - Store and retrieve optimal solutions to previously solved subproblems within the N-dimensional memo data structure
- Seen
  - Track which unique keys of the N-dimensional memo data structure have been previously seen
- With
  - "include this item" concept used for 0-1 knapsack algorithms where we find the optimal solution by either including xor excluding each i-th item
- Without
  - "exclude this item" concept used for 0-1 knapsack algorithms where we find the optimal solution by either including xor excluding each i-th item
- Exit Conditions
  - We can exit early under non-optimal conditions (ie. depth-first-search pruning) or for invalid inputs (ie. out-of-bounds)
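These ingredients combine in a 0-1 knapsack sketch (a generic maximum-value knapsack, not any specific problem from the lists below): each i-th item is either included (with) xor excluded (without), the memo is keyed by `(i, remaining capacity)`, and an exit condition prunes the invalid branch:

```python
from functools import lru_cache

def knapsack(weights, values, capacity):
    """Generic 0-1 knapsack sketch: branch on 'with' xor 'without' per item."""
    N = len(weights)

    @lru_cache(maxsize=None)            # memo keyed by (i, cap)
    def go(i, cap):
        if i == N:                      # base case: no items remain
            return 0
        without = go(i + 1, cap)        # exclude the i-th item
        if weights[i] <= cap:           # exit condition: prune over-capacity branch
            with_ = values[i] + go(i + 1, cap - weights[i])  # include the i-th item
            return max(with_, without)
        return without

    return go(0, capacity)

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9
```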
It is our conviction, based on considerable experience teaching the subject, that the art of formulating and solving problems using dynamic programming can be learned only through active participation by the student. No amount of passive listening to lectures or of reading text material prepares the student to formulate and solve novel problems. The student must first discover, by experience, that proper formulation is not quite as trivial as it appears when reading a textbook solution; then, by solving problems on his own, he will acquire the feel for the subject that ultimately renders proper formulation easy and natural. For this reason, this book contains a large number of instructional problems, carefully chosen to allow the student to acquire the art that we seek to convey. The student must do these problems on his own. Solutions are given in the back of the book because the reader needs feedback on the correctness of his procedures in order to learn, but any student who reads the solution before seriously attempting the problem does so at his own peril. He will almost certainly regret this passivity when faced with an examination or when confronted with real-world problems. We have seen countless students who have clearly understood and can religiously repeat our lectures on the dynamic programming solution of specific problems fail utterly on midterm examinations asking that the same techniques and ideas be used on similar, but different, problems. Surprised, and humbled, by this experience, the same students often then begin to take seriously our admonition to do homework problems and do not just read the solution and think "of course that is how to do them," and many have written perfect final exams. Why dynamic programming is more art than science we do not know. We suspect that part of the problem is that students do not expect to be called upon to use common sense in advanced mathematical courses and, furthermore, have very little practice or experience with common sense in this context.
But common sense, not mathematical manipulation, is what it takes to be successful in a course in dynamic programming.
-The Art and Theory of Dynamic Programming, Dreyfus and Law (1977)
- 5. Longest Palindromic Substring
- 38. Count and Say
- 45. Jump Game II
- 55. Jump Game
- 62. Unique Paths
- 63. Unique Paths II
- 64. Minimum Path Sum
- 70. Climbing Stairs
- 72. Edit Distance
- 91. Decode Ways
- 96. Unique Binary Search Trees
- 97. Interleaving String
- 120. Triangle
- 139. Word Break
- 140. Word Break II
- 198. House Robber
- 213. House Robber II
- 221. Maximal Square
- 256. Paint House
- 264. Ugly Number II
- 265. Paint House II
- 279. Perfect Squares
- 300. Longest Increasing Subsequence
- 309. Best Time to Buy and Sell Stock with Cooldown
- 322. Coin Change
- 337. House Robber III
- 354. Russian Doll Envelopes
- 368. Largest Divisible Subset
- 376. Wiggle Subsequence
- 377. Combination Sum IV
- 416. Partition Equal Subset Sum
- 473. Matchsticks to Square
- 474. Ones and Zeroes
- 486. Predict the Winner
- 494. Target Sum
- 509. Fibonacci Number
- 514. Freedom Trail
- 516. Longest Palindromic Subsequence
- 518. Coin Change 2
- 549. Binary Tree Longest Consecutive Sequence II
- 568. Maximum Vacation Days
- 576. Out of Boundary Paths
- 583. Delete Operation for Two Strings
- 647. Palindromic Substrings
- 650. 2 Keys Keyboard
- 673. Number of Longest Increasing Subsequence
- 712. Minimum ASCII Delete Sum for Two Strings
- 714. Best Time to Buy and Sell Stock with Transaction Fee
- 718. Maximum Length of Repeated Subarray
- 746. Min Cost Climbing Stairs
- 779. K-th Symbol in Grammar
- 787. Cheapest Flights Within K Stops
- 799. Champagne Tower
- 823. Binary Trees With Factors
- 877. Stone Game
- 879. Profitable Schemes
- 931. Minimum Falling Path Sum
- 983. Minimum Cost For Tickets
- 1025. Divisor Game
- 1029. Two City Scheduling
- 1035. Uncrossed Lines
- 1043. Partition Array for Maximum Sum
- 1137. N-th Tribonacci Number
- 1140. Stone Game II
- 1143. Longest Common Subsequence
- 1230. Toss Strange Coins
- 1235. Maximum Profit in Job Scheduling
- 1335. Minimum Difficulty of a Job Schedule
- 1345. Jump Game IV
- 1402. Reducing Dishes
- 1406. Stone Game III
- 1458. Max Dot Product of Two Subsequences
- 1463. Cherry Pickup II
- 1473. Paint House III
- 1510. Stone Game IV
- 1690. Stone Game VII
- 1696. Jump Game VI
- 1751. Maximum Number of Events That Can Be Attended II
- 1770. Maximum Score from Performing Multiplication Operations
- 1871. Jump Game VII
- 1879. Minimum XOR Sum of Two Arrays
- 1884. Egg Drop With 2 Eggs and N Floors
- 2008. Maximum Earnings From Taxi
- 2140. Solving Questions With Brainpower
- 2209. Minimum White Tiles After Covering With Carpets
- 2212. Maximum Points in an Archery Competition
- 2218. Maximum Value of K Coins From Piles
- 2244. Minimum Rounds to Complete All Tasks
- 2247. Maximum Cost of Trip With K Highways
- 2266. Count Number of Texts
- 2267. Check if There Is a Valid Parentheses String Path
- 2291. Maximum Profit From Trading Stocks
- 2304. Minimum Path Cost in a Grid
- 2328. Number of Increasing Paths in a Grid
- 2361. Minimum Costs Using the Train Line
- 2369. Check if There is a Valid Partition For The Array
- 2400. Number of Ways to Reach a Position After Exactly k Steps
- 2431. Maximize Total Tastiness of Purchased Fruits
- 2466. Count Ways To Build Good Strings
- 2556. Disconnect Path in a Binary Matrix by at Most One Flip
- 2581. Count Number of Possible Root Nodes
- 2585. Number of Ways to Earn Points
- 2646. Minimize the Total Price of the Trips
- 2684. Maximum Number of Moves in a Grid
- 2707. Extra Characters in a String
- 2745. Construct the Longest New String
- 2746. Decremental String Concatenation
- 2767. Partition String Into Minimum Beautiful Substrings
- 2770. Maximum Number of Jumps to Reach the Last Index
- 2826. Sorting Three Groups
- 2928. Distribute Candies Among Children I
- 2944. Minimum Number of Coins for Fruits
- 2957. Remove Adjacent Almost-Equal Characters
- 3040. Maximum Number of Operations With the Same Score II
- 3122. Minimum Number of Operations to Satisfy Conditions
We can perform a linear scan tracking the optimal solution ending at a specific index via a dynamic programming technique similar to Kadane's Algorithm.
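A minimal sketch of this technique, assuming 53. Maximum Subarray (from the list below) as the example: find the contiguous subarray with the largest sum.

```python
def max_subarray(nums):
    """53. Maximum Subarray via Kadane's Algorithm: a linear scan tracking
    the best sum of a subarray *ending at* the current index."""
    best = ending_here = nums[0]
    for x in nums[1:]:
        ending_here = max(x, ending_here + x)   # extend the subarray xor restart at x
        best = max(best, ending_here)           # track the overall optimum
    return best

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6
```

The "optimal solution ending at index i" framing is the key: it turns a quadratic search over all subarrays into a single linear scan.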
- 53. Maximum Subarray
- 121. Best Time to Buy and Sell Stock
- 122. Best Time to Buy and Sell Stock II
- 135. Candy
- 152. Maximum Product Subarray
- 329. Longest Increasing Path in a Matrix
- 413. Arithmetic Slices
- 487. Max Consecutive Ones II
- 525. Contiguous Array
- 624. Maximum Distance in Arrays
- 740. Delete and Earn
- 918. Maximum Sum Circular Subarray
- 1014. Best Sightseeing Pair
- 1218. Longest Arithmetic Subsequence of Given Difference
- 1372. Longest ZigZag Path in a Binary Tree
- 1425. Constrained Subsequence Sum
- 1567. Maximum Length of Subarray With Positive Product
- 1746. Maximum Subarray Sum After One Operation
- 2054. Two Best Non-Overlapping Events
- 2110. Number of Smooth Descent Periods of a Stock
- 2262. Total Appeal of A String
- 2327. Number of People Aware of a Secret
- 2393. Count Strictly Increasing Subarrays
- 2414. Length of the Longest Alphabetical Continuous Substring
- 2501. Longest Square Streak in an Array
- 2542. Maximum Subsequence Score
- 2712. Minimum Cost to Make All Characters Equal
- 2830. Maximize the Profit as the Salesman
- 2901. Longest Unequal Adjacent Groups Subsequence II
- 2943. Maximize Area of Square Hole in Grid
Remember, concentrate on the moment. Feel, don't think. Trust your instincts.
- 42. Trapping Rain Water
- 304. Range Sum Query 2D - Immutable
- 307. Range Sum Query - Mutable
- 361. Bomb Enemy
- 560. Subarray Sum Equals K
- 562. Longest Line of Consecutive One in Matrix
- 646. Maximum Length of Pair Chain
- 764. Largest Plus Sign
- 838. Push Dominoes
- 849. Maximize Distance to Closest Person
- 923. 3Sum With Multiplicity
- 926. Flip String to Monotone Increasing
- 1139. Largest 1-Bordered Square
- 1182. Shortest Distance to Target Color
- 1277. Count Square Submatrices with All Ones
- 1314. Matrix Block Sum
- 1339. Maximum Product of Splitted Binary Tree
- 1493. Longest Subarray of 1's After Deleting One Element
- 1546. Maximum Number of Non-Overlapping Subarrays With Sum Equals Target
- 1895. Largest Magic Square
- 1991. Find the Middle Index in Array
- 2049. Count Nodes With the Highest Score
- 2055. Plates Between Candles
- 2062. Count Vowel Substrings of a String
- 2090. K Radius Subarray Averages
- 2100. Find Good Days to Rob the Bank
- 2132. Stamping the Grid
- 2155. All Divisions With the Highest Score of a Binary Array
- 2163. Minimum Difference in Sums After Removal of Elements
- 2167. Minimum Time to Remove All Cars Containing Illegal Goods
- 2219. Maximum Sum Score of Array
- 2222. Number of Ways to Select Buildings
- 2247. Maximum Cost of Trip With K Highways
- 2266. Count Number of Texts
- 2337. Move Pieces to Obtain a String
- 2381. Shifting Letters II
- 2420. Find All Good Indices
- 2483. Minimum Penalty for a Shop
- 2485. Find the Pivot Integer
- 2574. Left and Right Sum Differences
- 2587. Rearrange Array to Maximize Prefix Score
- 2602. Minimum Operations to Make All Array Elements Equal
- 2609. Find the Longest Balanced Substring of a Binary String
- 2659. Make Array Empty
- 2680. Maximum OR
- 2711. Difference of Number of Distinct Values on Diagonals
- 2780. Minimum Index of a Valid Split
- 2971. Find Polygon With the Largest Perimeter
- 3070. Count Submatrices with Top-Left Element and Sum Less Than k
- 3096. Minimum Levels to Gain More Points
- 3128. Right Triangles
- 3147. Taking Maximum Energy From the Mystic Dungeon
- 3148. Maximum Difference Score in a Grid
Your eyes deceive you, don't trust them.
- 118. Pascal's Triangle
- 119. Pascal's Triangle II
- 207. Course Schedule
- 338. Counting Bits
- 523. Continuous Subarray Sum
- 659. Split Array into Consecutive Subsequences
- 974. Subarray Sums Divisible by K
- 1027. Longest Arithmetic Subsequence
- 1048. Longest String Chain
- 1220. Count Vowels Permutation
- 1262. Greatest Sum Divisible by Three
- 1759. Count Number of Homogenous Substrings
- 1858. Longest Word With All Prefixes
- 2121. Intervals Between Identical Elements
- 2364. Count Number of Bad Pairs
- 2370. Longest Ideal Subsequence
- 2579. Count Total Number of Colored Cells
- 2588. Count the Number of Beautiful Subarrays
- 2615. Sum of Distances
- 2713. Maximum Strictly Increasing Cells in a Matrix
- 2786. Visit Array Positions to Maximize Score
- 2870. Minimum Number of Operations to Make Array Empty
- 2959. Number of Possible Sets of Closing Branches
You have only begun to discover your power.
The most difficult part of dynamic programming is acquiring an understanding of a problem's subproblems and recurrence relations. It's amazing how difficult it can be to formulate DP solutions de novo, with foresight conjured upon demand, compared to validating existing DP solutions in hindsight. To develop a DP solution from scratch, we must create historical state variables and a series of states from which we consider all possibilities, optimally choosing among them via recurrence relations which build current subproblem solutions upon previous optimal subproblem solutions, ie. it is easier said than done.
If you strike me down, I shall become more powerful than you can possibly imagine.
- Algorithms Illuminated
- Competitive Programmer's Core Skills by Saint Petersburg State University
- Algorithms by Stanford University
- Algorithms and Data Structures by UC San Diego
- Algorithms for DNA Sequencing
- Master Theorem: Determine the asymptotic bound of recursive algorithms via standard recurrences
- Towers of Hanoi