Unit 5: Dynamic Programming and Randomized Algorithms


Intro

  • Dynamic programming (DP) is a method for optimizing algorithm performance by storing intermediate results
    • The process of storing intermediate results is called memoization
  • Dynamic programming combines the best of both Greedy algorithms (fast) and Exhaustive search algorithms (guaranteed to find the optimal result).
    • Greedy algorithms don’t guarantee finding the globally optimal result.
  • It gives us a way to design custom algorithms that systematically search all possibilities (thus guaranteeing correctness) while storing intermediate results to avoid recomputing (thus providing efficiency).
  • Dynamic programming algorithms are often easier to reinvent than to try to look up. That said, until you understand dynamic programming, it seems like magic. You have to figure out the trick before you can use it.
  • How to design an algorithm with DP technique:
    • Find the naive recursive solution (which typically has a large, often exponential, time complexity)
    • Store partial/intermediate results (Memoizing) to prevent re-computation from occurring
📌
The core idea of DP:
  • Identify and solve subproblems
  • Use the subproblems together to solve the larger problem
  • DP is the right method for optimization problems on combinatorial objects that have an inherent left-to-right order among their components, such as:
    • character strings,
    • rooted trees,
    • polygons,
    • and integer sequences.
 
  • Five steps to solve DP problems:
      1. Visualize the problem
      2. Find an appropriate subproblem
      3. Find relationships among subproblems
      4. Generalize the relationship
      5. Implement by solving subproblems in order
 
 
  • There are two common methods to solve DP problems:
    • Memoization
    • Tabulation

Fibonacci using DP

  • The naive recursive algorithm doesn’t work well with large inputs.
    • It has an exponential time complexity of O(2^n), because the same subproblems (e.g., F(n − 2)) are recomputed over and over.
    • Storing each intermediate result once (memoization or tabulation) brings the cost down to O(n).
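A minimal Python sketch of the three versions (the function names are mine): the naive recursion, a memoized (top-down DP) variant, and a tabulated (bottom-up DP) variant:

```python
from functools import lru_cache

def fib_naive(n):
    """Naive recursion: O(2^n) -- recomputes the same subproblems."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoization (top-down DP): each F(i) is computed once, O(n)."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    """Tabulation (bottom-up DP): fill a table from the base cases up."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(90), fib_tab(90))   # both print 2880067194370816120
```

Both DP variants run in O(n) time; the tabulated one additionally avoids recursion-depth limits.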

    Randomized Algorithms

    • Introducing randomness into our algorithms might speed things up, although perhaps at the expense of accuracy. Often, however, we can make the probability of error as low as we like while still speeding up the algorithm.
    • Algorithms vary based on the certainty of their result/output. Given that x is the desired output and y is the actual output:
      • Exact or Deterministic Algorithms: y equals x
      • Approximation Algorithms: y’s rank is close to x’s rank
      • Probabilistic Algorithms: y is usually x
      • Heuristic Algorithms: y’s rank is usually close to x’s rank
      • Las Vegas Algorithms: We always find the maximum value, and “usually” we find it fast. Such algorithms guarantee a correct result, but do not guarantee a fast running time.
      • Monte Carlo Algorithms: We always finish fast, but we might get a wrong answer or no answer at all. While such algorithms have a guaranteed running time, their result is not guaranteed.
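To make the last two classes concrete, here is a minimal Python sketch (the function names and the sampling budget are illustrative, not from these notes): a Monte Carlo max-finder that is always fast but only probably correct, and a Las Vegas search that is always correct but only probably fast.

```python
import random

def monte_carlo_max(values, samples=100):
    # Monte Carlo: always fast (O(samples) picks), but the answer is
    # only *probably* the true maximum -- we may miss it entirely.
    return max(random.choice(values) for _ in range(samples))

def las_vegas_find(values, target):
    # Las Vegas: the returned index is always correct, but the running
    # time is only *probably* small. Assumes target is present;
    # otherwise this loops forever.
    while True:
        i = random.randrange(len(values))
        if values[i] == target:
            return i

data = list(range(1_000_000))
random.shuffle(data)
print(monte_carlo_max(data))             # usually close to 999_999, not guaranteed
print(data[las_vegas_find(data, 42)])    # always 42
```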

    Skip List

    • Like BSTs, Skip Lists are designed to overcome a basic limitation of array-based and linked lists: Either search or update operations require linear time.
    • It is a probabilistic data structure, because it makes some of its decisions at random.
    • The Skip List is easier to implement than known balanced tree structures
    • The Skip List is not guaranteed to provide good performance (where good performance is defined as Θ(log n) search, insertion, and deletion time), but it will provide good performance with extremely high probability (unlike the BST which has a good chance of performing poorly).
    • The Skip List illustrates a tension that characterizes probabilistic data structures: the theoretical worst case is poor (here, Θ(n) per operation), but the probability of achieving the average-case Θ(log n) performance rises rapidly with n.
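A compact Python sketch of the idea (search and insert only), assuming integer keys; the max_level and p parameters are illustrative defaults, not prescribed by these notes:

```python
import random

class SkipNode:
    def __init__(self, key, level):
        self.key = key
        # forward[i] is the next node at level i
        self.forward = [None] * (level + 1)

class SkipList:
    """A minimal Skip List sketch: search and insert only."""

    def __init__(self, max_level=16, p=0.5):
        self.max_level = max_level
        self.p = p
        self.level = 0                        # highest level currently in use
        self.head = SkipNode(None, max_level)

    def _random_level(self):
        # Coin flips: each extra level is kept with probability p.
        lvl = 0
        while random.random() < self.p and lvl < self.max_level:
            lvl += 1
        return lvl

    def search(self, key):
        node = self.head
        # Start at the top level and drop down whenever the next key
        # would overshoot -- this is what "skips" most of the nodes.
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

    def insert(self, key):
        update = [self.head] * (self.max_level + 1)
        node = self.head
        for i in range(self.level, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node                  # last node before key at level i
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = SkipNode(key, lvl)
        for i in range(lvl + 1):
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

sl = SkipList()
for k in [3, 7, 1, 9]:
    sl.insert(k)
print(sl.search(7), sl.search(4))   # True False
```

Each insertion flips coins to pick a node’s height, so on average half the nodes reach level 1, a quarter reach level 2, and so on; that geometric thinning is what makes the expected search cost Θ(log n).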
