How to Optimally Handle Recursion?
What if you need to find the minimum number of coins that make up an amount, given coins of different denominations? For example, suppose you are given coins of 1 cent, 5 cents, and 10 cents and asked to make 18 cents using the fewest coins.
Finding the minimum number of coins is an optimization problem: for every denomination, the solution must recursively check whether the total can be reached by either choosing that coin or skipping it.
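The recursive approach can be sketched as follows (a minimal sketch using the denominations and amount from the example above; the function name `min_coins` is my own):

```python
def min_coins(coins, amount):
    """Return the minimum number of coins summing to `amount`,
    or infinity if the amount cannot be made."""
    if amount == 0:
        return 0
    if amount < 0:
        return float("inf")
    # Try every denomination and keep the choice that uses the fewest coins.
    return min(min_coins(coins, amount - c) + 1 for c in coins)

print(min_coins([1, 5, 10], 18))  # 5 coins: 10 + 5 + 1 + 1 + 1
```

Note that the same sub-amounts (e.g. 7 cents) are recomputed many times along different branches of the recursion.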
Whenever you use recursion, there is the overhead of repeatedly making the same function calls, which can lead to exponential time complexity depending on the total number of recursive calls and the cost of each call. As a result, the solution takes a long time to execute.
Is there a way to reduce the execution time and run the function optimally?
One way to reduce the time complexity when we need an optimal solution is to break the recursive problem into simpler sub-problems using dynamic programming.
Dynamic programming is a technique for finding an optimal solution to a complex recursive problem by breaking it down into simpler sub-problems, solving each sub-problem only once, and storing its result for reuse.
It is a mathematical optimization technique and a computer programming method.
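Applied to the coin example, a sketch of this idea might cache each sub-amount so it is solved only once (a hypothetical `min_coins_dp`, using Python's `functools.lru_cache` for the memo table):

```python
from functools import lru_cache

def min_coins_dp(coins, amount):
    """Minimum number of coins for `amount`; each sub-amount is solved once."""
    @lru_cache(maxsize=None)
    def solve(remaining):
        if remaining == 0:
            return 0
        if remaining < 0:
            return float("inf")
        # Same recurrence as before, but results are cached by `remaining`.
        return min(solve(remaining - c) + 1 for c in coins)
    return solve(amount)

print(min_coins_dp((1, 5, 10), 18))  # 5
```

With memoization there are at most `amount + 1` distinct sub-problems, so the running time drops from exponential to roughly O(amount × number of denominations).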
Which problems can I solve using Dynamic Programming?
For dynamic programming to apply, a problem needs to have two essential properties.
- Optimal substructure: The optimal solution to the problem can be constructed from the optimal solutions of its subproblems (the principle of optimality). The problem is broken down into smaller subproblems, each a scaled-down version of the original.
- Overlapping sub-problems: As part of the recursive process, we solve the same subproblem more than once.
Finding the Fibonacci number of 6 is shown below as an example of dynamic programming.
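A minimal sketch of the Fibonacci computation with memoization (assuming the common convention fib(0) = 0, fib(1) = 1):

```python
def fib(n, memo=None):
    """nth Fibonacci number, storing each result so it is computed only once."""
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        # Overlapping subproblems: fib(n-1) and fib(n-2) share sub-calls,
        # but the memo table makes each value computed exactly once.
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(6))  # 8
```

Without the memo table, fib(6) would trigger 25 calls; with it, each of fib(2)..fib(6) is computed once.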