Dynamic programming (DP) is a problem-solving technique common in competitive programming. DP decomposes a problem using a recurrence relation, where each recursive term is a value calculated once (i.e., values are saved to avoid re-computation), and evaluates sub-problems in a valid order of their dependencies.

As a prerequisite, we must have a recurrence relation. This doesn’t mean recursive execution: the recurrence is just a mathematical construct that describes a relationship between states. Decisions are values we compute once, not tasks run on every visit. More parameters generally means more states and more run time, so we should minimise the parameters needed for a decision: optimising a DP largely means minimising parameters, which means less time needed per decision.
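For example, the Fibonacci recurrence f(n) = f(n - 1) + f(n - 2), with base cases f(0) = 0 and f(1) = 1, is purely a relationship between states: each f(k) is a value computed once, and every later term that depends on it reuses the saved result.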

DP combines the correctness of brute force with the efficiency of greedy algorithms.

Very high-level steps

We have a particular methodology (a small sketch follows the list):

  • Visualise examples (start small, define base cases, and gradually increase the problem size).
  • Determine how the problem exhibits optimal substructure, i.e., characterise the structure of an optimal solution.
  • Find a relationship between sub-problems, i.e., define the value of an optimal solution recursively in terms of optimal solutions to sub-problems.
  • Compute the value of an optimal solution, typically bottom-up, solving sub-problems in dependency order (or top-down with memoisation).
  • Construct the optimal solution from the computed information (if you want more than just the value of the optimal solution, store extra information as you compute).
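As a minimal illustration of the first three steps, consider the classic climbing-stairs problem (a stand-in example, not specific to these notes): ways(n) counts the ways to climb n stairs taking 1 or 2 steps at a time.

// Steps 1-3 applied to climbing stairs: small cases give the base cases
// ways(0) = ways(1) = 1, and considering the last move (1 or 2 stairs)
// gives the recurrence over sub-problems.
long long ways(int n) {
    if (n <= 1) return 1;             // base cases
    return ways(n - 1) + ways(n - 2); // recurrence: optimal substructure
}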

High-level approach

Starting from an example, break the solution down into simple, small, structured, low-level decisions. From there, we can generalise this simple solution to bigger problems. Consider the actions we have to take for a given simple decision.

If we need to use recursion, we should be really sure which recurrence relation we’re modelling. We also want to make sure induction holds: all recursive calls must lead to a base case, and all base cases must be correct. This is the most basic form our DP solution could take; odds are it won’t have a good time or space complexity compared to an optimised version.
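Continuing the climbing-stairs sketch, a memoised form of that most basic solution: every recursive call strictly decreases n, so every path reaches a base case and induction holds.

#include <vector>

// Memoised climbing stairs: each state n is computed once, then cached.
long long ways(int n, std::vector<long long>& memo) {
    if (n <= 1) return 1;                  // base cases (correct by inspection)
    if (memo[n] != 0) return memo[n];      // already computed: reuse it
    return memo[n] = ways(n - 1, memo) + ways(n - 2, memo);
}

long long ways(int n) {
    std::vector<long long> memo(n + 1, 0); // 0 marks "not yet computed"
    return ways(n, memo);
}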

Optimisation

To convert to an iterative method, we need to (a sketch follows this list):

  • Think about the bounds our values should be within. This lets us allocate space for our cache, and also determine the bounds of a loop.
    • The values are based on the function’s valid arguments.
    • Our base cases and starting arguments can be used as endpoints.
  • Identify any dependencies. This lets us figure out the parameters of our loops. The current iteration of the loop may depend on its recursive terms (i.e., they must be calculated before the current term).
    • Identify any variables our recursive call’s return value relies on.
      • What do those require to be computed? Likely a recursive call. That means we depend on some past value (does the other value increase or decrease?).
    • If our recursive implementation successively decreases a parameter until the base case, our iterative solution must increase that parameter from the base case.
  • Think about the order of operations we execute for a given value of our bounded variables. This lets us figure out the loop precedence.
    • For every recursive term, at least one argument must match the direction of its parameter’s for loop.
    • The outermost loop should only have dependencies in one direction.
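Applying these checks to the climbing-stairs sketch: the valid arguments are 0 to n (giving the cache size and loop bounds), dp[i] depends on dp[i - 1] and dp[i - 2] (smaller indices, so the recursion’s decreasing n becomes an increasing i), and a single forward loop matches the dependency direction.

#include <vector>

// Iterative (tabulated) climbing stairs. Bounds: states 0..n, so the
// cache needs n + 1 slots. Dependencies: dp[i] reads dp[i - 1] and
// dp[i - 2], so i must increase away from the base cases.
long long ways(int n) {
    std::vector<long long> dp(n + 1, 0);
    dp[0] = 1;                         // base cases seed the loop
    if (n >= 1) dp[1] = 1;
    for (int i = 2; i <= n; ++i)
        dp[i] = dp[i - 1] + dp[i - 2]; // recursive terms already computed
    return dp[n];
}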

In our conversion, we replace recursive function calls with array accesses. Then we can apply space-saving tricks to reduce the space complexity (a sketch follows this list):

  • First, identify constant dependencies.
    • i.e., for each iteration of the loop, do we only rely on certain values of the array? These are constant dependencies.
    • That parameter must be one we can place in the outer loop, based on the dependency analysis earlier.
    • This applies recursively: after space-saving one parameter, we can try to space-save the inner loops as well.
  • Replace array accesses with variables.
    • Replace returns with assignments.
  • Move to the outer loop and shift.
    • Shift the values. If we save two values, dp_i2 and dp_i1, then we need to overwrite them each iteration (by shifting each value one position along).
  • Then initialise everything we use.
    • dp_i might be computed/initialised within the loop, but everything else can be initialised outside (or in a previous iteration).
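For the climbing-stairs sketch, the constant dependencies are just the previous two values, so the array collapses into shifted variables (matching the dp_i1 / dp_i2 naming above):

// Space-saved climbing stairs: dp[i] only reads dp[i - 1] and dp[i - 2],
// so two variables replace the array. dp_i is computed inside the loop;
// dp_i1 and dp_i2 are initialised outside it and shifted each iteration.
long long ways(int n) {
    long long dp_i1 = 1, dp_i2 = 1;     // ways(1) and ways(0)
    for (int i = 2; i <= n; ++i) {
        long long dp_i = dp_i1 + dp_i2; // current term
        dp_i2 = dp_i1;                  // shift one back
        dp_i1 = dp_i;
    }
    return dp_i1;
}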

What this means is that the space-saving technique largely has two variants:

  • Either we save a constant number of variables.
  • Or we save a single vector and possibly overwrite it on each loop iteration (see the sketch below).
    • We can also modify in place, if that’s possible!
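A sketch of the vector variant, using a coin-change counting problem (numWays, a hypothetical example that also appears in the time-complexity section below): the 2-D table over (coin index, amount) collapses to one vector, overwritten in place once per coin.

#include <vector>

// Rolling-vector variant for numWays: the 2-D recurrence
// dp[i][a] = dp[i - 1][a] + dp[i][a - coins[i]] only needs the previous
// row and earlier entries of the current row, so one vector updated in
// place per coin suffices.
long long numWays(const std::vector<int>& coins, int amount) {
    std::vector<long long> ways(amount + 1, 0);
    ways[0] = 1;                       // one way to make 0: take no coins
    for (int coin : coins)             // outer loop: the collapsed dimension
        for (int a = coin; a <= amount; ++a)
            ways[a] += ways[a - coin]; // ways[a - coin] is the current row;
                                       // ways[a] still holds the previous row
    return ways[amount];
}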

TL;DR. We only save what we need. This is easy to decipher just from the recurrence. If our recurrence needs only the previous iteration, that’s all we need to save. If it goes back further, then we need to save more values.

Time complexity

For recursive approaches, it can be hard to figure out the time complexity. We can follow this guiding principle, where we multiply two factors (worked through after the list):

  • How many terms do we have to compute? How many unique states do we visit?
    • i.e., for a function numWays(n, amount) where both parameters vary, we have O(n × amount) unique states.
  • And how much local computation do we have to do? Assuming results are cached, what is the per-call complexity?
    • In many cases, everything is trivially O(1). We should assume cached recursive calls are O(1).
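Multiplied out for numWays: O(n × amount) unique states, each doing O(1) local work once recursive calls are treated as cached lookups, gives O(n × amount) time overall.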

Cache implementation

There are two types of caching we can do. The first is memoisation, a top-down approach for recursive solutions. Top-down means we effectively reduce the operation space each time (i.e., we go from len(nums) down to 0).

The other is tabulation, a bottom-up approach for iterative solutions. Bottom-up means we increase the operation space, i.e., we go from 0 up to len(nums). In both cases, we store the results of the function in an array after they’re computed.

We can implement this cache with a static storage container in C++. In Python, we can use the @cache decorator from functools.

static vector<int> tab;

Vectors (for tabulation) are great for caching because they’re fast to access, but they require that we pass non-negative indices. In these cases, we’d have to add if-checks. Alternatively, we could use a hash map (for memoisation), which is good for sparse bounds or complex arguments.
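A minimal sketch of the hash-map option for the numWays example, assuming the state (i, amount) is packed into a single 64-bit key (the packing scheme is just one convenient choice):

#include <cstdint>
#include <unordered_map>
#include <vector>

// Hash-map memoisation for numWays(i, amount): the number of ways to
// make `amount` using coins[i..]. Useful when reachable states are
// sparse, or when arguments don't map cleanly onto array indices.
long long numWays(const std::vector<int>& coins, int i, int amount,
                  std::unordered_map<std::uint64_t, long long>& memo) {
    if (amount == 0) return 1;                     // exact change made
    if (amount < 0 || i == (int)coins.size()) return 0;
    std::uint64_t key = (std::uint64_t(i) << 32) | std::uint32_t(amount);
    auto it = memo.find(key);
    if (it != memo.end()) return it->second;       // computed once, reused
    long long result = numWays(coins, i, amount - coins[i], memo)  // take coins[i]
                     + numWays(coins, i + 1, amount, memo);        // skip coins[i]
    return memo[key] = result;
}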

Since C++20, a static vector can be declared constinit, which guarantees the (empty) vector is initialised at compile time rather than at program start-up; its elements are still allocated at run time. This tricks LeetCode into thinking we’ve used less memory than expected.
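A one-line sketch (C++20):

#include <vector>

// C++20: the empty vector itself is statically initialised at compile
// time; any elements pushed into it are still allocated at run time.
constinit std::vector<int> tab;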
