7. Dynamic Programming
Dynamic Programming (DP)
- All about remembering the past: cache the result of every subproblem so it is computed only once.
Optimal Substructure
- The bigger problem can be expressed in terms of smaller sub-problems of the same type (a recurrence relation).
- This property is also present in Divide and Conquer and, in general, in all recursive problems.
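- Example (a standard illustration, not from these notes): in 0/1 knapsack, the best value using the first i items with capacity w satisfies best(i, w) = max(best(i-1, w), val[i] + best(i-1, w - wt[i])), where the second option is allowed only when wt[i] <= w; the big problem is expressed through the same problem on a smaller prefix of items.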
Overlapping Subproblems
- This property is the giveaway that the given problem is a DP problem: the same subproblems show up again and again, so caching their answers saves repeated work.
- States
- #states is basically the DP array size.
- Transitions
- Transitions are the recursive calls made from a state (how one state is computed from other states).
- What does dp(states) represent?
- Base cases: basically, when/where does your transition equation fail?
- Estimated time complexity of DP problems = #states * (1 + average cost of transitions)
- Exact time complexity of DP problems = total transition time over all states
- Space complexity of DP problems = #states * space required for each state
- Cost of transition = extra time taken in a state besides the recursive calls themselves.
- ⭐ In DP problems, we need to think about/practise finding the states (both their form and their constraints) and the transitions needed to solve the problem; a small sketch follows below.
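A minimal sketch of these ideas (Fibonacci is an assumed example, not part of the notes): the only state is the index n, each state makes two transitions, and the base cases are n = 0 and n = 1, so by the estimate above the memoized version takes about #states * transitions = O(n) time and O(n) space.

#include <bits/stdc++.h>
using namespace std;

vector<long long> memo; // memo[i] == -1 means "state i not computed yet"

// state: n (a form state only; there is no constraint state here)
long long fib(int n) {
    if (n <= 1) return n;                     // base cases: where the transition equation fails
    if (memo[n] != -1) return memo[n];        // cache check
    return memo[n] = fib(n - 1) + fib(n - 2); // two transitions; save and return
}

int main() {
    int n = 50;
    memo.assign(n + 1, -1);
    cout << fib(n) << "\n"; // 12586269025
}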
Recursive (memoization) | Iterative (tabulation) |
---|---|
Slower | Faster |
No need to care about the order in which states are computed | Important to compute states in an order where the current state can be derived from previously calculated states |
⭐ Doesn't evaluate unnecessary states | All states are evaluated |
Cannot apply many optimizations | Can apply many optimizations |
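To make the comparison concrete, here is a tabulated (iterative) sketch of the same Fibonacci recurrence used above; the point is the order of evaluation: dp[i] is filled only after dp[i-1] and dp[i-2], and every state up to n is evaluated whether it is needed or not.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 50;
    vector<long long> dp(n + 1, 0);
    dp[0] = 0;                         // base cases are filled first
    if (n >= 1) dp[1] = 1;
    for (int i = 2; i <= n; i++)       // order matters: i-1 and i-2 are already computed
        dp[i] = dp[i - 1] + dp[i - 2];
    cout << dp[n] << "\n";             // 12586269025
}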
Some Initialization Points
#include <bits/stdc++.h>
using namespace std;

// In global scope
vector<int> dp;
int n;

void solve() {
    cin >> n;
    // Reset the dp array for every test case.
    // dp.assign(n, 0) both resizes and overwrites old values;
    // dp.clear() + dp.resize(n), or dp = vector<int>(n, 0), are equivalent alternatives.
    dp.assign(n, 0);
    // ... solve the test case using dp ...
}

int main() {
    int t; cin >> t;
    while (t--) solve();
}
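Note that a global dp array keeps its contents between test cases, so skipping the reset silently reuses stale answers. If the memo uses -1 to mean "not computed" (as in the Fibonacci sketch above), reset with dp.assign(n, -1) instead of zeros.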
General structure of DP functions
int dp[N][M]; // dp array: one cell per state

// Function whose arguments are the state (form state(s) + constraint state(s)).
// In knapsack:
//   form state: the index of the item we are currently deciding on.
//   constraint state: the remaining capacity; we cannot go above the bag's weight limit W.
int rec(<form state(s)>, <constraint state(s)>) {
    // Pruning cases (invalid states)
    // Base cases
    // Cache check
    // Recursive calls and computation (transitions)
    // Save in dp[...] and return
}
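A concrete instance of this template (a sketch; the names n, W, wt, val and the limits N, M are assumptions, not from the notes): 0/1 knapsack with form state i = index of the current item and constraint state w = remaining capacity.

#include <bits/stdc++.h>
using namespace std;

const int N = 105, M = 10005;
int n, W;            // number of items, knapsack capacity
int wt[N], val[N];   // weight and value of each item
int dp[N][M];        // dp[i][w]: best value using items i..n-1 with capacity w
bool vis[N][M];      // has dp[i][w] been computed?

// form state: i (current item index), constraint state: w (remaining capacity)
int rec(int i, int w) {
    if (i == n) return 0;                // base case: no items left
    if (vis[i][w]) return dp[i][w];      // cache check
    int ans = rec(i + 1, w);             // transition 1: skip item i
    if (wt[i] <= w)                      // pruning: take item i only if it fits
        ans = max(ans, val[i] + rec(i + 1, w - wt[i])); // transition 2: take item i
    vis[i][w] = true;
    return dp[i][w] = ans;               // save and return
}

int main() {
    n = 3; W = 8;
    int w_[] = {3, 4, 5}, v_[] = {30, 50, 60};
    for (int i = 0; i < n; i++) { wt[i] = w_[i]; val[i] = v_[i]; }
    cout << rec(0, W) << "\n";           // expected: 90 (take the items of weight 3 and 5)
}

Here #states ≈ n * W and each state makes at most two O(1) transitions, so the time complexity is O(n * W) and the space complexity is #states = O(n * W), matching the estimates above.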