🎯Dynamic Programming Roadmap
Master the fundamentals of Dynamic Programming with this carefully structured roadmap!
🧠Why do we need Dynamic Programming?
When solving problems, we often run into the same subproblems over and over again.
✅Think of a naive recursive solution:
It breaks a problem into smaller subproblems.
But if the same subproblem appears multiple times, you waste time recalculating it again and again.
✅Dynamic Programming fixes this by:
Recognizing overlapping subproblems
Storing their solutions so you can reuse them
🚀Dynamic Programming = Recursion + Memory
✅Example: Imagine you’re cooking 10 dishes: 7 of them need chopped onions. 🧅
Without DP (Naive Recursion)
Every time a dish needs onions, you chop them from scratch.
You’re doing the same task repeatedly.
With DP
You chop the onions once and store them in a bowl.
Whenever another dish needs onions, you just reuse them.
No repeated effort!
📊Techniques to solve Dynamic Programming
1. Memoization (Top-Down, Recursive approach)
Solve the big problem first, and store answers as you go
If a smaller subproblem is already solved, you reuse it.
Otherwise, you solve it and store the result for future use.
How it works:
                ways(5)
               /       \
          ways(4)     ways(3)
          /    \       /    \
     ways(3) ways(2) ways(2) ways(1)

Notice that ways(3) and ways(2) each appear more than once: these are the overlapping subproblems that memoization computes only once.
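The recursion tree above can be sketched with memoization, assuming the classic climbing-stairs formulation of ways(n) (how many ways to climb n stairs taking 1 or 2 steps at a time), which matches the base cases dp(1)=1 and dp(2)=2 used later in this article:

```python
from functools import lru_cache

# ways(n): number of ways to climb n stairs taking 1 or 2 steps at a time.
# lru_cache stores each result, so ways(3) and ways(2) are computed only once.
@lru_cache(maxsize=None)
def ways(n: int) -> int:
    if n <= 2:          # base cases: ways(1) = 1, ways(2) = 2
        return n
    return ways(n - 1) + ways(n - 2)

print(ways(5))  # 8
```

Without the cache this runs in exponential time; with it, each subproblem is solved exactly once.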
2. Tabulation (Bottom-Up, Iterative approach)
Build all answers from the smallest pieces upward
Keep storing the answers and use them to calculate answers for bigger problems
Avoid recursion entirely
Initial State (Base cases known)
+-------+--------+--------+--------+--------+--------+
| Step | 1 | 2 | 3 | 4 | 5 |
+-------+--------+--------+--------+--------+--------+
| Ways | dp(1)=1| dp(2)=2| | | |
+-------+--------+--------+--------+--------+--------+
Next State: dp(3) = dp(2) + dp(1) = 2 + 1 = 3
+-------+--------+--------+--------+--------+--------+
| Step | 1 | 2 | 3 | 4 | 5 |
+-------+--------+--------+--------+--------+--------+
| Ways | dp(1)=1| dp(2)=2| dp(3)=3| | |
+-------+--------+--------+--------+--------+--------+
Continue with all cells of the table...
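The table-filling process above can be sketched as follows, assuming the same climbing-stairs problem where dp(1)=1 and dp(2)=2:

```python
def ways(n: int) -> int:
    # dp[i] = number of ways to reach step i; base cases match the table above
    if n <= 2:
        return n
    dp = [0] * (n + 1)
    dp[1], dp[2] = 1, 2
    for i in range(3, n + 1):           # fill the table left to right
        dp[i] = dp[i - 1] + dp[i - 2]   # e.g. dp[3] = dp[2] + dp[1] = 3
    return dp[n]

print(ways(5))  # 8
```

No recursion is involved: each cell is computed from cells already filled in.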
🧠When to use Dynamic Programming?
┌────────────────────────┐
│ Problem: Can you │
│ break it into smaller │
│ subproblems? │
└───────────┬────────────┘
│Yes
▼
┌──────────────────────────────┐
│ Do subproblems repeat? │
└───────────┬──────────────────┘
│Yes
▼
┌────────────────────────────────────────┐
│ Choose DP approach:                    │
│                                        │
│ - Natural recursion? Small input size? │
│   → MEMOIZATION (Top-Down)             │
│                                        │
│ - Deep recursion? Large input size?    │
│   → TABULATION (Bottom-Up)             │
└────────────────────────────────────────┘
🧠The Problem-Solving Process
Step 1: Define the State. What information do I need to track to solve subproblems? This is often the hardest part.
Step 2: Find the Recurrence. How do I build the solution from smaller pieces? Write it out mathematically first.
Step 3: Handle Base Cases. What are the simplest subproblems I can solve directly?
Step 4: Choose Implementation
Top-down (memoization): More intuitive, easier to code
Bottom-up (tabulation): Often more efficient, better for optimization
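As a hypothetical walkthrough of these four steps, consider Min Cost Climbing Stairs (LeetCode 746):

```python
# Step 1 (state): dp[i] = minimum cost to reach step i.
# Step 2 (recurrence): dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2]).
# Step 3 (base cases): dp[0] = dp[1] = 0 (you may start at step 0 or step 1).
# Step 4 (implementation): bottom-up tabulation, chosen here for simplicity.
def min_cost_climbing_stairs(cost: list[int]) -> int:
    n = len(cost)
    dp = [0] * (n + 1)
    for i in range(2, n + 1):
        dp[i] = min(dp[i - 1] + cost[i - 1], dp[i - 2] + cost[i - 2])
    return dp[n]

print(min_cost_climbing_stairs([10, 15, 20]))  # 15
```

Working through the steps in this order keeps you from writing code before the recurrence is clear.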
🧠Leetcode Problems for Dynamic Programming
The following are the fundamental underlying patterns of DP, and the common FAANG LeetCode questions for each of the patterns:
Pattern 1: Linear Sequential DP
Core: Making optimal decisions at each position in a sequence, where each decision affects future possibilities
Problem List:
State: dp[i] = optimal result considering elements up to index i
Transition: dp[i] = optimal_choice(dp[i-1], dp[i-2], ...) based on the current element and constraints
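One well-known problem matching this pattern is House Robber (LeetCode 198); a minimal sketch, space-optimized to two variables:

```python
def rob(nums: list[int]) -> int:
    # prev2, prev1 play the roles of dp[i-2] and dp[i-1]
    prev2, prev1 = 0, 0
    for x in nums:
        # either skip house i (keep dp[i-1]) or rob it (dp[i-2] + nums[i])
        prev2, prev1 = prev1, max(prev1, prev2 + x)
    return prev1

print(rob([2, 7, 9, 3, 1]))  # 12
```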
Pattern 2: Subarray Optimization DP
Core: Finding optimal contiguous subarrays by deciding whether to extend or restart at each position
Problem List:
State: dp[i] = optimal subarray ending at index i
Transition: dp[i] = max(nums[i], dp[i-1] + nums[i]), or similar extend-vs-restart logic
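The canonical example here is Maximum Subarray (LeetCode 53), solved with Kadane's algorithm; a minimal sketch:

```python
def max_subarray(nums: list[int]) -> int:
    # dp = best subarray sum ending at the current index
    dp = best = nums[0]
    for x in nums[1:]:
        dp = max(x, dp + x)      # extend the subarray or restart at x
        best = max(best, dp)
    return best

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6
```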
Pattern 3: Unbounded Resource DP (Knapsack Variants)
Core: Making unlimited choices from a set to achieve a target, optimizing count or ways
Problem List:
State: dp[amount] = optimal result for target amount
Transition: dp[amount] = optimal_over_all_choices(dp[amount - choice[i]]) for each available choice
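A standard problem in this family is Coin Change (LeetCode 322), where each coin may be used an unlimited number of times; a minimal sketch:

```python
def coin_change(coins: list[int], amount: int) -> int:
    # dp[a] = fewest coins needed to make amount a (inf if unreachable)
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:                      # unlimited use of each coin
            if c <= a:
                dp[a] = min(dp[a], dp[a - c] + 1)
    return -1 if dp[amount] == INF else dp[amount]

print(coin_change([1, 2, 5], 11))  # 3
```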
Pattern 4: Single String Processing DP
Core: Processing a string by making decisions about prefixes/suffixes or substrings
Problem List:
State: dp[i] = optimal result for the string up to index i, or dp[i][j] for the substring from i to j
Transition: depends on the current character and previously computed results for smaller strings/subsequences
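A representative problem for this pattern is Word Break (LeetCode 139), where dp[i] records whether the prefix of length i can be segmented; a minimal sketch:

```python
def word_break(s: str, word_dict: list[str]) -> bool:
    words = set(word_dict)
    n = len(s)
    # dp[i] = True if the prefix s[:i] can be split into dictionary words
    dp = [True] + [False] * n
    for i in range(1, n + 1):
        for j in range(i):
            if dp[j] and s[j:i] in words:   # a valid prefix plus one more word
                dp[i] = True
                break
    return dp[n]

print(word_break("leetcode", ["leet", "code"]))  # True
```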
Pattern 5: Grid Traversal DP
Core: Finding optimal paths through a 2D grid with movement constraints
Problem List:
State: dp[i][j] = optimal result reaching cell (i, j)
Transition: dp[i][j] = function(dp[i-1][j], dp[i][j-1]) based on allowed movements
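A classic instance is Minimum Path Sum (LeetCode 64), where you may only move right or down; a minimal sketch:

```python
def min_path_sum(grid: list[list[int]]) -> int:
    rows, cols = len(grid), len(grid[0])
    # dp[i][j] = cheapest path cost from (0,0) to (i,j)
    dp = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if i == 0 and j == 0:
                dp[i][j] = grid[i][j]
            elif i == 0:                    # first row: only reachable from the left
                dp[i][j] = dp[i][j - 1] + grid[i][j]
            elif j == 0:                    # first column: only reachable from above
                dp[i][j] = dp[i - 1][j] + grid[i][j]
            else:
                dp[i][j] = min(dp[i - 1][j], dp[i][j - 1]) + grid[i][j]
    return dp[-1][-1]

print(min_path_sum([[1, 3, 1], [1, 5, 1], [4, 2, 1]]))  # 7
```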
Pattern 6: Two-Sequence DP
Core: Comparing or aligning two sequences to find optimal matching/transformation
Problem List:
State: dp[i][j] = optimal result considering the first i characters of sequence1 and the first j characters of sequence2
Transition: dp[i][j] = optimal_choice(dp[i-1][j], dp[i][j-1], dp[i-1][j-1]) based on character match/mismatch
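The textbook example for this pattern is Longest Common Subsequence (LeetCode 1143); a minimal sketch:

```python
def lcs(a: str, b: str) -> int:
    m, n = len(a), len(b)
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:        # characters match: extend the diagonal
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:                            # mismatch: drop a character from either side
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs("abcde", "ace"))  # 3
```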
🚀 Final Thoughts
Dynamic Programming is a skill that gets easier and more intuitive with practice. Don’t be disheartened if it feels slow at first; with repetition, the patterns will become second nature!
Thanks for reading!
If you found this helpful, feel free to connect with me on: