DIVIDE AND CONQUER VS DYNAMIC PROGRAMMING


The huge rise in computing power over the last few decades has greatly improved our capacity to solve complicated problems in a variety of scientific and technical fields, and recent advances in optimization have made these techniques ever more valuable for decision-making. Divide and conquer algorithms are well suited to modern parallel computers because they contain a lot of inherent parallelism and perform well with caches and deep memory hierarchies. Dynamic programming is a powerful tool for deriving classic algorithms for a range of optimization problems, and one of the most elegant algorithm design paradigms. This blog explains the differences and similarities between the two approaches with the help of two examples: the Fibonacci series and binary search.



Table of Contents:


Divide and Conquer

Divide and Conquer Algorithm steps:

When & Where to use Divide and Conquer Algorithm?

Divide and Conquer Applications

Time Complexity

Dynamic Programming

Dynamic Programming Approach steps

When & Where to use Dynamic Programming Approach?

Dynamic Programming Applications

Dynamic Programming and Divide-and-Conquer Similarities

Difference Between DP and DC with examples

Conclusion




Divide and Conquer

Divide the problem into a number of subproblems, each a smaller instance of the original. Conquer the subproblems by solving them recursively; if a subproblem is small enough, solve it directly. Finally, combine the solutions to the subproblems to solve the original problem.





Divide and Conquer Algorithm contains the following steps:

  • Divide: This involves dividing the problem into smaller subproblems.
  • Conquer: Solving the smaller subproblems recursively.
  • Combine: Merge the solutions of the subproblems obtained during the recursion to solve the original problem.
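These three steps can be seen in merge sort. Here is a minimal Python sketch (the function names are illustrative, not from the original):

```python
def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(arr) <= 1:
        return arr
    # Divide: split the list into two halves.
    mid = len(arr) // 2
    # Conquer: sort each half recursively.
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```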


When & Where to use Divide and Conquer Algorithm:

This algorithm is useful when a problem can naturally be broken into independent sub-problems. We can use it to divide the problem into sub-problems, but we must ensure that the same sub-problem is not solved multiple times. If the same sub-problem is solved several times, the problem is better suited to a dynamic programming approach than to the divide and conquer algorithm.



Divide and Conquer Applications:

  • Binary Search

  • Merge Sort

  • Quick Sort

  • Strassen’s Matrix multiplication



Time Complexity



Now let's take an example and find the time complexity of a recursive problem.
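The worked example that originally appeared here is missing, so as a standard illustration (not necessarily the author's original example) consider merge sort, which divides a problem of size n into two halves and combines them in linear time. Unrolling the recurrence:

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + cn \\
     &= 4\,T(n/4) + 2cn \\
     &= 2^k\,T(n/2^k) + kcn \\
     &= n\,T(1) + cn\log_2 n = O(n \log n) \quad \text{when } k = \log_2 n.
\end{aligned}
```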















This approach can also significantly reduce time complexity.

For example, bubble sort runs in O(n^2), while quicksort (a divide-and-conquer algorithm) reduces the average-case time complexity to O(n log(n)). Similarly, a linear search takes O(n), but binary search (another divide-and-conquer algorithm) reduces it to O(log(n)).




Dynamic Programming

Like the divide-and-conquer method, dynamic programming solves problems by combining the solutions to subproblems. A dynamic-programming algorithm solves each subproblem once and saves the result in a table, avoiding the need to recalculate the answer each time it solves a subproblem.





Dynamic Programming Algorithm steps:

  • Define a class of subproblems
  • Give a recurrence based on solving each subproblem in terms of simpler subproblems 
  • Give an algorithm for computing the recurrence.



When & Where to use Dynamic Programming:

Dynamic programming is a technique for solving problems with a specific structure (optimal substructure), in which a problem can be broken down into subproblems that are similar to the original. Such problems can naturally be solved with recursion, but recursion is not required: a DP can also be solved iteratively.

The first, and often most difficult, step in solving a problem with DP is recognizing that DP applies at all. Ask whether the solution to your problem can be expressed as a function of solutions to similar, smaller problems.


Dynamic Programming Applications:


  • Fibonacci number series

  • Knapsack problem

  • Tower of Hanoi

  • All pair shortest path by Floyd-Warshall

  • Shortest path by Dijkstra

  • Project scheduling



Dynamic Programming and Divide-and-Conquer Similarities

The two approaches cannot be treated as entirely different things, because both work by recursively breaking a problem into two or more sub-problems of the same or a related type until they become simple enough to be solved directly. The sub-problems' solutions are then combined to produce a solution to the original problem.

So why do we still have distinct paradigm names, and why call dynamic programming an extension? Because a dynamic programming approach can only be applied to problems that satisfy certain constraints or requirements, and it then adds a memoization or tabulation mechanism on top of the divide and conquer strategy.



Dynamic Programming: Prerequisites/Restrictions

As we've just seen, there are two main attributes that a divide and conquer problem must have in order for dynamic programming to be applicable:
  1. Optimal substructure — an optimal solution can be constructed from optimal solutions of its subproblems.

  2. Overlapping subproblems — the problem can be broken down into subproblems which are reused several times; in other words, a recursive algorithm solves the same subproblem over and over rather than always generating new subproblems.

Once these two conditions are fulfilled we can say that a divide and conquer problem may be solved using a dynamic programming approach.



Dynamic Programming: Extension for Divide and Conquer

The divide and conquer approach is extended with two techniques (memoization and tabulation), both of which have the goal of storing and reusing sub-problem solutions that can dramatically enhance speed. The naïve recursive implementation of the Fibonacci function, for example, has a time complexity of O(2^n), whereas the DP approach takes just O(n) time.


Memoization (top-down cache filling) refers to the technique of caching and reusing previously computed results. As a result, the memoized fib function might look like this:
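For instance, a Python sketch of a memoized `fib` (the cache structure here is one reasonable choice, not the only one):

```python
def fib(n, memo=None):
    # Top-down memoization: cache each result so that every distinct
    # subproblem is computed only once.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]          # reuse a previously computed result
    if n <= 1:
        return n                # base cases: fib(0) = 0, fib(1) = 1
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```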

 





Tabulation (bottom-up cache filling) is similar, but concentrates on filling the cache entries. It is easiest to compute the cache values iteratively. This is how the tabulated version of fib might look:
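A Python sketch of the tabulated version (the table layout is illustrative):

```python
def fib(n):
    if n <= 1:
        return n                 # base cases: fib(0) = 0, fib(1) = 1
    # Bottom-up tabulation: fill the cache entries iteratively,
    # from the base cases upward.
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```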



So What Is the Difference Between DP and DC, After All?

The DP prerequisites and techniques mentioned above fit together simply: a divide and conquer problem with optimal substructure and overlapping subproblems becomes solvable by dynamic programming once memoization or tabulation is added.


 

Let us understand the difference by using a Fibonacci Number problem.




Example

Problem Description: Find the nth Fibonacci number. Any term in the Fibonacci series is the sum of the preceding two terms. We will discuss two approaches:


  • Divide and Conquer
  • Dynamic Programming


Solution 1 - Divide and Conquer


Here, we break the problem down into two subproblems, calculating the (n-1)th and (n-2)th Fibonacci numbers, and then add (combine) the two results to get the nth Fibonacci number.


Pseudo Code
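The original pseudo code is missing here; a Python sketch of what it might look like:

```python
def fib(n):
    if n <= 1:
        return n                 # base cases: fib(0) = 0, fib(1) = 1
    # Divide: compute the (n-1)th and (n-2)th Fibonacci numbers,
    # then combine by adding the two results. Nothing is cached, so
    # the same subproblems are solved over and over.
    return fib(n - 1) + fib(n - 2)
```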



Analysis

The recurrence relation for the above solution is


T(n) = T(n-1) + T(n-2) + O(1)

Time Complexity: exponential, O(2^n) (and at least Ω(2^(n/2)))



Solution 2 - Dynamic Programming


We will use the same relation, but we will now store the results of the problem that we solved.


For example, fib(3) is calculated for both fib(4) and fib(5), so we can memoize these results in an array. The idea is to store each fib(i) in a table as it is calculated.


Pseudo Code
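The original pseudo code is missing here; a Python sketch of the idea described above, storing each result in a table as it is calculated:

```python
def fib(n, table=None):
    # The table caches fib(i) for each i; -1 marks "not yet computed".
    if table is None:
        table = [-1] * (n + 1)
    if n <= 1:
        return n                 # base cases: fib(0) = 0, fib(1) = 1
    if table[n] != -1:
        return table[n]          # reuse the stored result
    table[n] = fib(n - 1, table) + fib(n - 2, table)
    return table[n]
```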


Analysis

For every i ∈ [1, n], we calculate fib(i) only once. So, the recurrence relation for the above is


              T(n) = T(n-1) + O(1)

Time Complexity: O(n)


The key takeaway here is that because our divide and conquer problem contains overlapping subproblems, caching of sub-problem solutions becomes practical, allowing memoization and tabulation to enter into the picture.


Now, let us take an example of Binary Search.

A binary search, also known as a half-interval search, is a technique that locates a target value within a sorted array. If the target value is not equal to the array's middle element, the half in which the target cannot lie is discarded, and the search continues on the remaining half until the target value is located. If the remaining half is empty, the target is not in the array. This is exactly the divide-and-conquer principle.

 

Pseudo Code
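The original pseudo code is missing here; a recursive Python sketch of the search described above (returning -1 for "not found" is a common convention, not something the original specifies):

```python
def binary_search(arr, target, lo=0, hi=None):
    # Search for target in the sorted list arr[lo..hi].
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:
        return -1                # remaining half is empty: not found
    mid = (lo + hi) // 2
    if arr[mid] == target:
        return mid               # target located at index mid
    if arr[mid] < target:
        # Discard the left half; the target can only lie to the right.
        return binary_search(arr, target, mid + 1, hi)
    # Discard the right half; the target can only lie to the left.
    return binary_search(arr, target, lo, mid - 1)
```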



 

Here you can clearly see the divide-and-conquer principle at work: the original array is recursively divided into sub-arrays, and the required element is searched for within them.

Now, is it still possible to apply dynamic programming?

The answer is no. Since there are no overlapping subproblems, we can't utilize DP to solve it. We divide the array into entirely independent sections each time. As a result, the sub-problem solutions cannot be reused elsewhere.


All the points of difference between the two approaches are summarized in the table below:


Divide and Conquer | Dynamic Programming
Top-down approach. | Typically bottom-up (tabulation), though top-down memoization is also used.
Prefers recursion. | Often prefers iteration.
Sub-problems are independent. | Sub-problems are dependent and overlapping.
Solutions of sub-problems are not stored. | Solutions of sub-problems are stored in a table.
The same sub-problem may be solved many times. | No sub-problem is ever recomputed.
Less efficient due to rework. | More efficient because stored solutions are reused.
Solutions are usually simple. | Solutions can sometimes be complex and tricky.
Only one decision sequence is generated. | Many decision sequences may be generated.



Conclusion:


The primary difference between divide and conquer and dynamic programming is that divide and conquer combines the solutions of independent subproblems to obtain the answer to the main problem, whereas dynamic programming reuses the results of overlapping subproblems to find the best solution to the main problem. They are different paradigms for approaching a problem, and each gives better results in different cases.


