I needed to build an inventory optimization system for a warehouse client last year. The problem was classic: too much working capital tied up in stock, but not enough safety buffer to handle demand spikes. I ended up using a combination of EOQ, safety stock calculations, and time series forecasting to solve it. Python made every step tractable.

This article walks through the four techniques I now reach for when building inventory models. By the end, you will have working code for Economic Order Quantity (EOQ), safety stock, ARIMA demand forecasting, and exponential smoothing.

TLDR

  • Economic Order Quantity (EOQ) finds the optimal order size that minimizes total inventory costs
  • Safety stock calculations protect against demand variability and supply delays
  • ARIMA models forecast demand patterns for seasonal inventory planning
  • Exponential smoothing assigns decaying weights to historical demand for short-term forecasts
  • Python’s numpy, pandas, and statsmodels libraries make all of these tractable in code

What is inventory optimization?

Inventory optimization is the practice of balancing stock levels to minimize costs while avoiding stockouts. Holding too much inventory ties up capital and storage space; holding too little means missed sales and production delays. Python makes it straightforward to implement the core algorithms that drive these decisions.

Economic Order Quantity (EOQ)

EOQ finds the order size that minimizes the total annual cost of ordering and holding inventory. The formula balances the fixed cost of placing each order against the carrying cost of holding stock.


import numpy as np

def calculate_eoq(demand, ordering_cost, holding_cost):
    eoq = np.sqrt((2 * demand * ordering_cost) / holding_cost)
    return eoq

demand_per_year = 1200
ordering_cost = 50
holding_cost_per_unit = 2

eoq = calculate_eoq(demand_per_year, ordering_cost, holding_cost_per_unit)
print(f"Optimal order quantity: {eoq:.1f} units")

The function computes the square root of (2 * demand * ordering cost) divided by holding cost. A store that sells 1,200 units per year, pays $50 per order, and incurs $2 per unit in annual holding costs should order about 245 units at a time. That works out to roughly five orders per year, so annual ordering cost (about $245) exactly balances average holding cost (245/2 × $2 ≈ $245) — the balance point that marks the EOQ minimum.


Optimal order quantity: 244.9 units
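
To verify that this really is a minimum, it helps to evaluate the total annual cost — ordering plus holding — at a few candidate order quantities (the comparison quantities here are arbitrary):

```python
import numpy as np

def total_annual_cost(q, demand, ordering_cost, holding_cost):
    # (orders per year * cost per order) + (average on-hand stock * holding cost)
    return (demand / q) * ordering_cost + (q / 2) * holding_cost

demand, ordering_cost, holding_cost = 1200, 50, 2
eoq = np.sqrt(2 * demand * ordering_cost / holding_cost)  # ~244.9

for q in (100, 200, eoq, 300, 400):
    cost = total_annual_cost(q, demand, ordering_cost, holding_cost)
    print(f"Q = {q:6.1f} -> total annual cost = ${cost:.2f}")
# Q =  100.0 -> total annual cost = $700.00
# Q =  200.0 -> total annual cost = $500.00
# Q =  244.9 -> total annual cost = $489.90
# Q =  300.0 -> total annual cost = $500.00
# Q =  400.0 -> total annual cost = $550.00
```

Note how flat the cost curve is near the minimum: ordering 200 or 300 units costs only about $10 more per year than the optimum, which is why EOQ results are usually rounded to convenient batch sizes.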

Visualizing inventory levels over time

The sawtooth pattern is the classic inventory graph. Each order arrives just as stock approaches the safety buffer, keeping cycle stock averaging about EOQ/2 above the safety stock level.


import numpy as np
import matplotlib.pyplot as plt

def calculate_eoq(demand, ordering_cost, holding_cost):
    return np.sqrt((2 * demand * ordering_cost) / holding_cost)

demand_per_period = 100
ordering_cost = 50
holding_cost = 2
eoq = calculate_eoq(demand_per_period * 12, ordering_cost, holding_cost)
safety_stock = 25

time_periods = np.arange(1, 31)
inventory_level = np.zeros(30)
inventory_level[0] = eoq + safety_stock
for i in range(1, 30):
    projected = inventory_level[i - 1] - demand_per_period
    if projected < safety_stock:
        # another period of demand would breach the buffer: a new EOQ batch arrives
        inventory_level[i] = eoq + safety_stock
    else:
        inventory_level[i] = projected

plt.figure(figsize=(10, 5))
plt.plot(time_periods, inventory_level, marker='o')
plt.title('Inventory Level Over Time')
plt.xlabel('Period')
plt.ylabel('Units')
plt.axhline(y=safety_stock, color='r', linestyle='--', label='Safety Stock')
plt.legend()
plt.tight_layout()
plt.show()

The plot shows inventory ramping down linearly between orders, with a new EOQ batch arriving whenever the next period of demand would push stock below the buffer. The dashed red line marks the safety stock level. Python’s numpy module handles the array math and matplotlib produces the visualization.


Plot generated: sawtooth wave from ~270 units down to ~70, with a replenishment every third period


Safety stock calculations

EOQ assumes constant demand, but real demand fluctuates. Safety stock buffers against variability in both demand and lead time. I use a probabilistic approach that targets a specific service level.


import numpy as np
from scipy import stats

def calculate_safety_stock(demand, lead_time, std_dev_demand, std_dev_lead_time, service_level=0.95):
    z = stats.norm.ppf(service_level)
    safety_stock = z * np.sqrt(
        (std_dev_demand**2 * lead_time) + (demand**2 * std_dev_lead_time**2)
    )
    return safety_stock

demand = 100
lead_time = 2
std_dev_demand = 10
std_dev_lead_time = 0.5
service_level = 0.95

safety = calculate_safety_stock(demand, lead_time, std_dev_demand, std_dev_lead_time, service_level)
print(f"Safety stock: {safety:.1f} units")

The formula combines uncertainty from both demand variability and lead time variability. At a 95% service level (z ≈ 1.645), the safety stock comes out to roughly 85 units. This is the buffer kept on top of expected lead-time demand to absorb fluctuations without stockouts.
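
The safety stock feeds directly into the reorder point: order when on-hand stock falls to the expected demand during the lead time plus the buffer. A quick sketch using the same figures as the example above:

```python
import numpy as np
from scipy import stats

# Reorder point = expected demand during lead time + safety stock.
# Same figures as the safety stock example above.
demand, lead_time = 100, 2
std_dev_demand, std_dev_lead_time = 10, 0.5

z = stats.norm.ppf(0.95)  # ~1.645 for a 95% service level
safety_stock = z * np.sqrt(std_dev_demand**2 * lead_time + demand**2 * std_dev_lead_time**2)
reorder_point = demand * lead_time + safety_stock

print(f"Reorder point: {reorder_point:.1f} units")
```

With these numbers the reorder point lands around 285 units: place a new order whenever on-hand plus on-order stock dips to that level.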

ARIMA for demand forecasting

When demand shows a pattern over time, I use ARIMA to model and forecast it. ARIMA captures autocorrelation in historical demand, letting me predict future orders with quantifiable confidence intervals.


import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.arima.model import ARIMA

data = pd.read_csv('https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv', parse_dates=['Month'], index_col='Month')

model = ARIMA(data, order=(5, 1, 0))
fit_model = model.fit()

forecast = fit_model.forecast(steps=12)

plt.figure(figsize=(10, 5))
plt.plot(data.index, data['Passengers'], label='Historical')
future_dates = pd.date_range(start=data.index[-1], periods=13, freq='MS')[1:]
plt.plot(future_dates, forecast, label='Forecast', color='orange')
plt.title('Airline Passengers - ARIMA Forecast')
plt.xlabel('Date')
plt.ylabel('Passengers')
plt.legend()
plt.tight_layout()
plt.show()

print(f"Next 12 month forecast:\n{forecast.round(1)}")

ARIMA has three components. The autoregressive (AR) part uses past values to predict the future. The integrated (I) part differences the data to make it stationary. The moving average (MA) part models the residual errors from previous predictions. I applied this to the classic airline passengers dataset, which exhibits both trend and seasonality.


Next 12 month forecast:
1961-01-01    433.2
1961-02-01    445.4
1961-03-01    469.1
...
1961-12-01    604.7
Freq: MS, Name: predicted_mean, dtype: float64


Exponential smoothing

Exponential smoothing is simpler than ARIMA, and its Holt-Winters variant also handles trend and seasonality. It weights recent observations more heavily, with weights decaying exponentially for older data points.
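
To make the decaying weights concrete, here is a minimal hand-rolled version of simple exponential smoothing; the alpha value and the ten demand numbers (the first months of the airline series) are chosen just for illustration:

```python
# Simple exponential smoothing by hand: each smoothed value blends the
# newest observation with the previous smoothed value, so the weight on
# an observation k steps back is alpha * (1 - alpha)**k.
def simple_exponential_smoothing(series, alpha=0.3):
    smoothed = [series[0]]  # seed with the first observation
    for value in series[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

demand = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
result = simple_exponential_smoothing(demand, alpha=0.3)
print([round(x, 1) for x in result])
```

With alpha = 0.3, the latest observation carries 30% of the weight, the one before it 21%, then 14.7%, and so on — the geometric decay that gives the method its name.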


import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.holtwinters import ExponentialSmoothing

data = pd.read_csv('https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv', parse_dates=['Month'], index_col='Month')

model = ExponentialSmoothing(data, trend='add', seasonal='add', seasonal_periods=12)
fit_model = model.fit()

forecast = fit_model.forecast(steps=12)

plt.figure(figsize=(10, 5))
plt.plot(data.index, data['Passengers'], label='Historical')
future_dates = pd.date_range(start=data.index[-1], periods=13, freq='MS')[1:]
plt.plot(future_dates, forecast, label='Forecast', color='green')
plt.title('Airline Passengers - Exponential Smoothing Forecast')
plt.xlabel('Date')
plt.ylabel('Passengers')
plt.legend()
plt.tight_layout()
plt.show()

print(f"Holt-Winters forecast:\n{forecast.round(1)}")

The Holt-Winters variant handles both trend and seasonality explicitly. The additive trend captures growth over time and the additive seasonal component captures repeating patterns within each year. This approach is my go-to when ARIMA feels like overkill and I need quick turnaround on seasonal inventory plans.


Holt-Winters forecast:
1961-01-01    437.8
1961-02-01    449.6
1961-03-01    502.1
...
1961-12-01    606.9
Freq: MS, Name: predicted_mean, dtype: float64

FAQ

Q: When should I use EOQ versus ARIMA?

EOQ applies when demand is stable and predictable. ARIMA applies when demand has temporal patterns like trend or seasonality that can be exploited for forecasting. EOQ gives an optimal order quantity; ARIMA gives a demand forecast that feeds into reorder point calculations.

Q: How does safety stock relate to service level?

Higher service levels require more safety stock. A 95% service level means 95% of demand is met without stockout. The relationship uses the z-score from the normal distribution, which is why demand variability directly drives safety stock requirements.
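
A quick way to see how steeply that multiplier climbs, using scipy’s inverse normal CDF (the service levels listed are arbitrary examples):

```python
from scipy import stats

# The z multiplier on safety stock grows faster and faster
# as the service-level target approaches 100%.
for service_level in (0.90, 0.95, 0.99, 0.999):
    z = stats.norm.ppf(service_level)
    print(f"{service_level:.1%} service level -> z = {z:.2f}")
# 90.0% service level -> z = 1.28
# 95.0% service level -> z = 1.64
# 99.0% service level -> z = 2.33
# 99.9% service level -> z = 3.09
```

Moving from 95% to 99.9% service nearly doubles the safety stock for the same demand variability, which is why service-level targets are a cost decision, not just an operational one.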

Q: Which forecasting method should I start with?

Exponential smoothing is the practical starting point. It requires fewer assumptions than ARIMA, handles trend and seasonality, and produces forecasts quickly. If exponential smoothing residuals show complex autocorrelation patterns, upgrading to ARIMA makes sense.

Q: Can Python handle multi-echelon inventory optimization?

Multi-echelon optimization requires linear programming solvers like PuLP or scipy.optimize. These handle constraints across multiple warehouses and distribution centers simultaneously. The EOQ and safety stock methods in this article apply at the individual SKU level; multi-echelon modeling requires a more sophisticated setup.

Conclusion

Python gives me a complete toolkit for inventory optimization without purchasing expensive specialized software. EOQ handles the steady-state order sizing problem. Safety stock calculations handle uncertainty. ARIMA and exponential smoothing handle demand forecasting. For most operational inventory problems, these four techniques cover the ground I need.
