Data Structures and Algorithms B
Dynamic Programming

Dynamic programming refers to the idea of breaking a problem down into smaller subproblems, solving each subproblem optimally, and combining those optimal solutions into a solution for the whole problem.

We can apply the dynamic programming paradigm to a problem if the problem has optimal substructure: an optimal solution can be assembled from optimal solutions to its subproblems.

  • For example, anything that builds on a predictable sequence of subproblems can be solved efficiently with dynamic programming.

    • General summations, factorials, and the Fibonacci sequence can all be computed this way. It is often the natural way humans evaluate summations: rather than re-adding 1 + 2 + 3 + 4 from scratch to find the 4th partial sum, we do 1 + 2 = 3; 3 + 3 = 6; 6 + 4 = 10.

    • This is why many recursive (self-referential) algorithms have a dynamic programming aspect.
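The running-sum idea above can be sketched as a few lines of Python; the function name is illustrative, and each step extends the previous partial sum rather than recomputing from scratch:

```python
def partial_sum(n):
    """Sum 1 + 2 + ... + n by extending the previous partial sum."""
    total = 0
    for k in range(1, n + 1):
        total += k  # reuse the (k-1)th partial sum instead of re-adding 1..k
    return total

partial_sum(4)  # 1 + 2 = 3; 3 + 3 = 6; 6 + 4 = 10
```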

In other words, there exists a way to start with a very small base case and build up towards a solution. This is known as the bottom-up approach, and trying it can often help us discover whether or not DP applies.

Seen bottom-up, a problem is a good DP candidate when the brute-force approach produces a recursion tree with overlapping subproblems: the same subproblem appears many times, so solving each one once and reusing its answer saves work.
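One way to see the overlapping subproblems concretely is Fibonacci: the naive recursion below recomputes the same values exponentially many times, while a cached version (memoization, here via the standard library's `lru_cache`) solves each subproblem once. This is a sketch, not the only way to add caching:

```python
from functools import lru_cache

def fib_naive(n):
    # Brute force: the recursion tree re-solves fib(k) many times over --
    # these repeats are the overlapping subproblems.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Same recursion, but each subproblem is solved once and cached,
    # collapsing the exponential tree to linear work.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```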

The dimensionality of the parameters often tells us the dimensionality of our table. For instance, each state in the knapsack problem is described by two parameters: the items considered so far and the remaining capacity. Its table is therefore 2-dimensional, with n × W entries for n items and capacity W. The coin change problem, by contrast, optimizes over a single parameter (the target amount), so a 1-dimensional array suffices.
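As a sketch of the 1-dimensional case, the minimum-coins variant of coin change fills a single array indexed by amount; the denominations in the usage line are illustrative:

```python
def min_coins(coins, amount):
    """Fewest coins summing to `amount`, or -1 if it cannot be made."""
    INF = float("inf")
    # dp[a] = fewest coins needed to make amount a -- one cell per amount,
    # hence a 1-dimensional table.
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

min_coins([1, 5, 10, 25], 63)  # 25 + 25 + 10 + 1 + 1 + 1 -> 6 coins
```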
