Introduction to Dynamic Programming

Dynamic Programming is one of the most powerful design techniques for solving optimization problems. A Divide & Conquer algorithm partitions a problem into disjoint subproblems, solves the subproblems recursively, and then combines their solutions to solve the original problem. Dynamic Programming is used when the subproblems are not independent, i.e., when subproblems share sub-subproblems. In that case, divide and conquer may do more work than necessary, because it solves the same subproblem multiple times. Dynamic Programming solves each subproblem just once and stores the result in a table so that it can be retrieved whenever it is needed again. Dynamic Programming is a bottom-up approach: we solve all possible small problems and then combine their solutions to obtain solutions to bigger problems. Dynamic Programming is a paradigm of algorithm design in which an optimization problem is solved by combining solutions to subproblems and appealing to the "principle of optimality".

Characteristics of Dynamic Programming

Dynamic Programming works when a problem has the following features:
If a problem has optimal substructure, then we can recursively define an optimal solution. If a problem has overlapping subproblems, then we can improve on a recursive implementation by computing each subproblem only once. If a problem doesn't have optimal substructure, there is no basis for defining a recursive algorithm to find the optimal solution. If a problem doesn't have overlapping subproblems, we gain nothing by using dynamic programming. If the space of subproblems is small enough (i.e., polynomial in the size of the input), dynamic programming can be much more efficient than plain recursion.

Elements of Dynamic Programming

There are basically three elements that characterize a dynamic programming algorithm:
Note: Bottom-up means we start by solving the smallest subproblems first and then combine their solutions to obtain solutions to progressively larger problems.
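The two ideas above can be made concrete with a short sketch. The Fibonacci numbers are our own illustrative choice (not an example given in the text): the top-down version stores each subproblem's result in a table so it is computed only once, and the bottom-up version fills the table of small subproblems first.

```python
def fib_memo(n, memo=None):
    """Top-down with memoization: recurse as usual, but store each
    subproblem's result in a table so repeated subproblems are
    retrieved rather than recomputed."""
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]


def fib_bottom_up(n):
    """Bottom-up: solve all smaller subproblems first, combining two
    already-solved entries to obtain each bigger one."""
    if n < 2:
        return n
    table = [0] * (n + 1)  # table[i] will hold fib(i)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]


print(fib_memo(10), fib_bottom_up(10))  # both print 55
```

Either version runs in time linear in n, whereas naive recursion without a table recomputes the same subproblems exponentially often.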
