The challenge with the tiling problem is that it looks trivial at first glance – how many ways can you fill a 4xn wall with 4×1 tiles? – but it hides a recurrence relation that rewards those who dig into dynamic programming. I kept avoiding it until I needed a single problem to practice recursion, memoization, and bottom-up DP in one go. This is that problem.

This article covers the classic 4xn tiling problem, builds the recurrence relation from first principles, shows a naive recursive solution, then optimizes it with memoization and bottom-up DP. By the end, you will understand why DP matters here and how to implement all three approaches in Python.

TLDR

  • The tiling problem asks how many ways a 4xn wall can be filled with 4×1 tiles placed vertically or horizontally
  • The recurrence is f(n) = f(n-1) + f(n-4), with base cases f(0)=1 and f(1)=f(2)=f(3)=1
  • A naive recursive solution takes exponential time because it recomputes the same subproblems repeatedly
  • Memoization cuts this to O(n) by caching every computed result
  • Bottom-up DP builds the solution iteratively and is the most practical approach for large n

What is the Tiling Problem?

The tiling problem is a classic dynamic programming exercise. Given a wall that is 4 units tall and n units wide, the task is to count how many distinct ways the wall can be filled using tiles that are each 4 units long and 1 unit wide. Each tile can be placed standing vertically (covering a 4×1 area, a full column) or laid flat horizontally (covering a 1×4 area, just 1 unit of the wall's 4-unit height). Since a horizontal tile covers only one row, completing a horizontally tiled region requires four such tiles stacked on top of each other, which together fill a 4×4 block. The question is: for a wall of width n, how many valid filling patterns exist?
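Before deriving any formula, it helps to see the counts for small widths. The sketch below (the name count_tilings_brute is my own, not from any library) brute-forces the answer by always covering the leftmost empty cell of the grid, so each tiling is generated exactly once:

```python
def count_tilings_brute(n):
    """Brute-force count of 4 x n tilings with 4x1 tiles in either orientation."""
    grid = [[False] * n for _ in range(4)]

    def fill():
        # Find the first empty cell, scanning column by column, top to bottom
        for c in range(n):
            for r in range(4):
                if not grid[r][c]:
                    total = 0
                    # Option 1: vertical 4x1 tile filling all of column c
                    if r == 0 and all(not grid[i][c] for i in range(4)):
                        for i in range(4):
                            grid[i][c] = True
                        total += fill()
                        for i in range(4):
                            grid[i][c] = False
                    # Option 2: horizontal 1x4 tile in row r, columns c..c+3
                    if c + 3 < n and all(not grid[r][c + j] for j in range(4)):
                        for j in range(4):
                            grid[r][c + j] = True
                        total += fill()
                        for j in range(4):
                            grid[r][c + j] = False
                    return total  # both options explored for this cell
        return 1  # no empty cell left: one complete tiling

    return fill()


for n in range(1, 8):
    print(f"n={n}: {count_tilings_brute(n)}")  # prints 1, 1, 1, 2, 3, 4, 5
```

This enumeration is exponential and only usable for small n, but it provides ground truth to check the recurrence against.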

Recursive Solution and Recurrence Relation

The key insight is that every valid tiling of a 4xn wall must start with one of two patterns on the leftmost edge. The first option is placing a single vertical tile, which consumes 1 unit of width and leaves a 4x(n-1) wall to tile. The second option is placing four horizontal tiles stacked as a 4×4 block, which consumes 4 units of width and leaves a 4x(n-4) wall to tile. No other starting pattern is possible: a horizontal tile covers only 1 row at a time, and once one is placed, the three rows beneath it can only be completed by more horizontal tiles aligned over the same four columns, because a vertical tile no longer fits in a partially covered column. This gives us a recurrence relation: f(n) = f(n-1) + f(n-4).

When n equals 0, there is exactly one way to tile an empty wall – do nothing. When n is 1, 2, or 3, the wall is too narrow to fit a horizontal 1×4 tile, so only vertical tiles work, giving exactly 1 way for each of these widths. The recursive function below translates this directly into Python.

def tiling_recursive(n):
    # Base case: the empty wall has exactly one tiling (place nothing)
    if n <= 0:
        return 1
    # Widths 1-3 only fit vertical tiles: exactly one tiling each
    if n <= 3:
        return 1
    # A vertical tile leaves width n-1; a 4x4 horizontal block leaves width n-4
    return tiling_recursive(n - 1) + tiling_recursive(n - 4)


for n in range(1, 11):
    print(f"n={n}: {tiling_recursive(n)}")

The naive recursion is straightforward to read and reason about. For small values of n it works fine, but the computation tree grows exponentially because each call branches into two, and many subproblems appear repeatedly across different branches. If you want to understand recursion patterns in Python more broadly, the recursion article on AskPython covers how call stacks work and what to watch for with depth limits.

n=1: 1
n=2: 1
n=3: 1
n=4: 2
n=5: 3
n=6: 4
n=7: 5
n=8: 7
n=9: 10
n=10: 14
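To make the exponential blow-up concrete, here is a small instrumented sketch (tiling_calls is an illustrative helper of my own, not part of the article's code) that runs the same recurrence while counting how many calls it makes:

```python
def tiling_calls(n, counter):
    """Same recurrence as tiling_recursive, instrumented with a call counter."""
    counter[0] += 1
    if n <= 3:
        return 1  # covers the base cases n = 0..3
    return tiling_calls(n - 1, counter) + tiling_calls(n - 4, counter)


for n in (10, 20, 30):
    counter = [0]
    tiling_calls(n, counter)
    print(f"n={n}: {counter[0]} calls")  # 27, 689, and 17313 calls respectively
```

The call count works out to exactly 2·f(n) − 1, so it grows at the same exponential rate as the answer itself – roughly a factor of 1.38 per extra unit of width.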

Memoized Solution

Memoization wraps the recursive function with a cache that stores every result computed for a given n. Before recursing, the function checks whether the result already exists in the cache. If it does, it returns immediately. If not, it computes recursively and stores the result before returning. This eliminates all redundant computation, cutting the time complexity from exponential to linear. The space cost is O(n) for the cache plus O(n) for the call stack.

from functools import lru_cache


@lru_cache(maxsize=None)
def tiling_memoized(n):
    # Base cases are identical to the naive version
    if n <= 0:
        return 1
    if n <= 3:
        return 1
    # Each distinct n is computed once and then served from the cache
    return tiling_memoized(n - 1) + tiling_memoized(n - 4)


for n in range(1, 31):
    print(f"n={n}: {tiling_memoized(n)}")

The lru_cache decorator handles all the cache management automatically. With this change, widths in the hundreds compute instantly, while the naive recursion slows to a crawl once n passes roughly 50. The cache stores results keyed by n, so lookups are O(1). If you are coming from a Python functions background, you will recognize this pattern – it is essentially manual memoization made automatic through a decorator.

n=1: 1
n=2: 1
n=3: 1
n=4: 2
n=5: 3
n=6: 4
n=7: 5
n=8: 7
n=9: 10
n=10: 14
n=11: 19
n=12: 26
n=13: 36
n=14: 50
n=15: 69
n=16: 95
n=17: 131
n=18: 181
n=19: 250
n=20: 345
n=21: 476
n=22: 657
n=23: 907
n=24: 1252
n=25: 1728
n=26: 2385
n=27: 3292
n=28: 4544
n=29: 6272
n=30: 8657
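For comparison, here is what the decorator is doing under the hood – a hand-rolled sketch with an explicit dictionary cache (tiling_memo_manual is an illustrative name of my own):

```python
def tiling_memo_manual(n, cache=None):
    """Memoized recurrence with an explicit dict instead of lru_cache."""
    if cache is None:
        cache = {}
    if n in cache:
        return cache[n]  # cache hit: no recursion needed
    if n <= 3:
        result = 1  # base cases n = 0..3
    else:
        result = (tiling_memo_manual(n - 1, cache)
                  + tiling_memo_manual(n - 4, cache))
    cache[n] = result  # store before returning so later calls reuse it
    return result


print(tiling_memo_manual(30))  # 8657
```

The decorator version is shorter and less error-prone, but the explicit version makes the cache lifetime and key visible, which is useful when the cached function takes multiple arguments.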

Bottom-Up Dynamic Programming

The bottom-up approach flips the recursion into iteration. Instead of starting at n and recursing down to smaller values, it starts at 1 and builds up to n, filling the dp array in order. Each entry dp[i] depends only on dp[i-1] and dp[i-4], both of which are already computed by the time we reach index i. This eliminates recursion overhead entirely and uses only O(n) time and O(n) space. For production code where n might be large, this is the version I reach for.

def tiling_dp(n):
    if n <= 0:
        return 1
    if n <= 3:
        return 1
    dp = [0] * (n + 1)
    for i in range(n + 1):
        if i <= 3:
            # Base cases: widths 0-3 each have exactly one tiling
            dp[i] = 1
        else:
            # Vertical tile leaves width i-1; horizontal 4x4 block leaves width i-4
            dp[i] = dp[i - 1] + dp[i - 4]
    return dp[n]


result = tiling_dp(50)
print(f"tiling_dp(50) = {result}")

The iterative approach fills the dp array from left to right. For each width i greater than 3, the value is the sum of the ways to tile a wall of width i-1 (after placing one vertical tile) and the ways to tile a wall of width i-4 (after placing the horizontal block). The base cases for indices 0 through 3 are set to 1, matching our recurrence definition. If you want to see how this pattern applies to other problems, the knapsack problem on AskPython follows a nearly identical DP structure.

tiling_dp(50) = 5453761

Space-Optimized DP

The full dp array is not strictly necessary. At any step, we only need the last four values to compute the current entry. We can replace the array with a four-element window that slides forward one position per iteration. This brings space complexity down to O(1) while keeping O(n) time.

def tiling_dp_optimized(n):
    if n <= 3:
        return 1
    # Window holds [f(i-4), f(i-3), f(i-2), f(i-1)]; starts as [f(0), f(1), f(2), f(3)]
    dp = [1, 1, 1, 1]
    for i in range(4, n + 1):
        next_val = dp[3] + dp[0]  # f(i) = f(i-1) + f(i-4)
        dp = dp[1:] + [next_val]  # slide the window forward by one
    return dp[3]


for n in [10, 20, 30, 40, 50]:
    print(f"n={n}: {tiling_dp_optimized(n)}")

The sliding window keeps the last four computed values in a list that shifts left by one position at each iteration. The new value becomes the sum of the most recent result and the result from four steps back. This works because our recurrence only ever needs those two values. Compared to the naive recursion, this version avoids the exponential call tree entirely. Compared to the full dp array, it cuts memory from O(n) to O(1), which matters when n reaches millions.

n=10: 14
n=20: 345
n=30: 8657
n=40: 217286
n=50: 5453761

FAQ

Q: Why is the recurrence f(n) = f(n-1) + f(n-4) and not something else?

The leftmost column of the 4xn wall can only be covered in two valid ways: a single vertical tile (which consumes 1 unit of width), or a block of four horizontal tiles stacked (which consumes 4 units of width). No other starting configuration works because partial horizontal coverage leaves a gap that cannot be filled with the available tile shapes. These two exclusive cases give us the sum in the recurrence.

Q: What happens if the tiles can be rotated 90 degrees?

The problem as covered here already permits rotation: each 4×1 tile either stands vertically or lies flat as a 1×4 tile, and the interesting count comes precisely from mixing the two orientations. If rotation were forbidden and only vertical placement allowed, every width n would have exactly one tiling – a row of n upright tiles – and the problem would be trivial. The same reasoning also generalizes: for an m×n wall with m×1 tiles that may be rotated, the recurrence becomes f(n) = f(n-1) + f(n-m).

Q: Can this be solved without recursion or DP?

The recurrence can be evaluated in O(log n) time using matrix exponentiation: express f(n) = f(n-1) + f(n-4) as a 4×4 companion matrix acting on the state of the last four values, and raise that matrix to the required power by repeated squaring. This is significantly more involved than the DP approach and only pays off for very large n. For practical purposes, the DP solution is the right balance of clarity and efficiency.
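As a sketch of that idea (all helper names here are my own), the recurrence can be packed into a companion matrix whose repeated squaring yields f(n) in O(log n) matrix multiplications:

```python
def mat_mult(A, B):
    """Product of two 4x4 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]


def mat_pow(M, p):
    """Raise a 4x4 matrix to power p by repeated squaring."""
    R = [[int(i == j) for j in range(4)] for i in range(4)]  # identity
    while p:
        if p & 1:
            R = mat_mult(R, M)
        M = mat_mult(M, M)
        p >>= 1
    return R


def tiling_matrix(n):
    """Compute f(n) = f(n-1) + f(n-4) via the 4x4 companion matrix."""
    if n <= 3:
        return 1
    # Maps the state [f(k), f(k-1), f(k-2), f(k-3)] to [f(k+1), f(k), f(k-1), f(k-2)]
    M = [[1, 0, 0, 1],
         [1, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 1, 0]]
    P = mat_pow(M, n - 3)
    # Initial state [f(3), f(2), f(1), f(0)] is all ones, so f(n) is the sum of row 0
    return sum(P[0])
```

With Python's arbitrary-precision integers this handles very large n comfortably, at the cost of more code than the one-line DP loop.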

Q: What is the time complexity of the bottom-up DP?

Bottom-up DP runs in O(n) time because it computes each of the n+1 subproblems exactly once, performing O(1) work per subproblem. The space complexity is O(n) for the full array or O(1) with the sliding window optimization.

Q: How does this compare to other DP problems like the knapsack?

The tiling problem is a linear recurrence DP, whereas the knapsack problem is a 2D DP over items and remaining capacity. Both share the same core idea: build solutions to larger problems from solutions to smaller subproblems. The tiling recurrence is simpler because it has only one dimension and a fixed two-term relation. Once you understand how f(n) builds from f(n-1) and f(n-4), the multi-dimensional thinking in knapsack feels like a natural extension.

The recursion-first approach builds intuition before jumping to optimizations. The dynamic programming versions are the ones I actually use in practice, but understanding where the recurrence comes from makes it easier to adapt the technique to related counting exercises like the subsequences problem, or to simpler recursion warm-ups like the factorial problem.
