
Tokenomics Simulations: Monte Carlo to Agent-Based Models

Simulation methods for tokenomics: sensitivity analysis, scenario analysis, Monte Carlo, agent-based modeling. Python code, charts, and method comparison.

Why Simulate Tokenomics

A spreadsheet model is a single scenario: “given these parameters, here’s the result.” But parameters are always imprecise. Will 5,000 or 50,000 users show up? What percentage will stake? When will investors start selling?

Simulation answers not “what will happen” but “what could happen and with what probability.” Instead of one forecast — a distribution of outcomes. Instead of “the price will be $0.50” — “in 90% of scenarios, the price stays above $0.30.”

This article covers four levels of simulation, from simple to complex. Each level adds precision but also requires more data and code.

Figure: four levels of tokenomics simulation — a pyramid from simple to complex. Sensitivity analysis (1 parameter, N data points, Google Sheets) → scenario analysis (3–5 scenarios, Google Sheets) → Monte Carlo (1,000+ runs, distributions, Python) → agent-based modeling (emergent effects, cadCAD).

Level 1: Sensitivity Analysis

The simplest method. Take a spreadsheet model, vary one parameter while holding all others fixed. See how the output depends on that parameter.

When to Use

  • At the early design stage — to understand which parameters are critical
  • When tuning bonding curve, emission, or vesting parameters
  • To answer “what breaks first”

Example: Staking Sensitivity to APR

Take a protocol with a 10M token supply that emits 800 tokens/day in staking rewards. Question: at what staking percentage does APR fall below 5% (the threshold at which large stakers leave)?

Result: above ~58% staking, APR approaches 5%. The exact threshold is ~58.4%. If the model assumes 60% staking, the system operates below the threshold.

| Staking percentage | APR | Status |
|---|---|---|
| 20% | 14.6% | Safe |
| 40% | 7.3% | Acceptable |
| 58% | 5.03% | Near threshold |
| 60% | 4.9% | Below threshold |
| 80% | 3.7% | Critical |
Python: sensitivity analysis code
import numpy as np
import matplotlib.pyplot as plt

total_supply = 10_000_000
daily_rewards = 800

staking_pcts = np.linspace(0.1, 0.9, 50)  # 10% to 90% staking
aprs = (daily_rewards * 365) / (total_supply * staking_pcts)

fig, ax = plt.subplots(figsize=(10, 5))
ax.plot(staking_pcts * 100, aprs * 100, linewidth=2.5, color='#2563eb')
ax.axhline(y=5, color='#dc2626', linestyle='--', label='5% APR threshold')
ax.fill_between(staking_pcts * 100, aprs * 100, 5,
                where=(aprs * 100 < 5), alpha=0.15, color='#dc2626')
ax.set_xlabel('Staking percentage (%)')
ax.set_ylabel('APR (%)')
ax.set_title('APR sensitivity to staking percentage')
ax.legend()
ax.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()
Method limitation
Sensitivity analysis varies one parameter at a time. In reality, parameters are interdependent: user growth increases both staking and sell pressure. To account for interdependencies, you need the next level — scenario analysis.

Level 2: Scenario Analysis

Fix a set of parameters for each scenario. The standard approach is three scenarios:

| Parameter | Pessimistic | Base | Optimistic |
|---|---|---|---|
| Users (month 12) | 5,000 | 20,000 | 80,000 |
| Staking % | 30% | 50% | 70% |
| Churn | 15%/mo | 8%/mo | 3%/mo |
| Sell pressure (investors) | 80% after cliff | 50% | 20% |

What It Shows

In the pessimistic scenario, sell pressure is 1.6M tokens/mo (80% of 2M unlock), and free float grows faster (few stakers). This is a double hit on price.

In the optimistic scenario, only 400K/mo is sold and 70% is staked. Sell pressure is 4x lower.

| Metric | Pessimistic | Base | Optimistic |
|---|---|---|---|
| Sell pressure | 1.6M/mo | 1.0M/mo | 0.4M/mo |
| Free float (month 12) | 23.8M | 17.0M | 10.2M |
| Monthly sell / float ratio (month 12) | 6.7% | 5.9% | 3.9% |
Python: scenario analysis code
import numpy as np
import matplotlib.pyplot as plt

months = np.arange(1, 25)

scenarios = {
    'Pessimistic': {
        'users_final': 5_000,
        'stake_pct': 0.30,
        'sell_pressure': 0.80,
        'color': '#dc2626'
    },
    'Base': {
        'users_final': 20_000,
        'stake_pct': 0.50,
        'sell_pressure': 0.50,
        'color': '#2563eb'
    },
    'Optimistic': {
        'users_final': 80_000,
        'stake_pct': 0.70,
        'sell_pressure': 0.20,
        'color': '#16a34a'
    }
}

total_supply = 100_000_000
initial_circulating = 10_000_000  # TGE
monthly_unlock = 2_000_000        # investor vesting

fig, axes = plt.subplots(1, 2, figsize=(14, 5))

for name, s in scenarios.items():
    circulating = np.zeros(len(months))
    net_sell = np.zeros(len(months))

    for i, m in enumerate(months):
        unlocked = min(initial_circulating + monthly_unlock * m, total_supply)
        staked = unlocked * s['stake_pct']
        free_float = unlocked - staked
        sell_tokens = monthly_unlock * s['sell_pressure']

        circulating[i] = free_float
        net_sell[i] = sell_tokens

    axes[0].plot(months, circulating / 1e6, label=name,
                 color=s['color'], linewidth=2)
    axes[1].plot(months, net_sell / 1e6, label=name,
                 color=s['color'], linewidth=2)

axes[0].set_title('Free float (M tokens)')
axes[0].set_xlabel('Month')
axes[0].legend()
axes[0].grid(True, alpha=0.3)

axes[1].set_title('Sell pressure (M tokens/mo)')
axes[1].set_xlabel('Month')
axes[1].legend()
axes[1].grid(True, alpha=0.3)

plt.tight_layout()
plt.show()
The scenario analysis problem
Three scenarios are three data points. What’s the probability of each? What lies between them? Scenario analysis doesn’t answer these questions. For that, you need Monte Carlo.

Level 3: Monte Carlo

The Monte Carlo method runs thousands of random scenarios. Instead of fixed parameters, you define distributions: “users will be between 5,000 and 80,000, most likely around 20,000.” The model runs 1,000–10,000 times with different random values.

The result is not a single point or three points, but a full distribution of outcomes with percentiles and confidence intervals.

How It Works

  1. Define input parameters and their distributions
  2. On each iteration, sample values from distributions
  3. Run the model, record the result
  4. Repeat 1,000–10,000 times
  5. Analyze the result distribution

Choosing Distributions

| Parameter | Distribution | Why |
|---|---|---|
| Number of users | Log-normal | Growth can be explosive but never negative |
| Staking percentage | Beta(5,5) or truncated normal | Bounded between 0 and 1; Beta(5,5) centers at 50%, tune α/β to shift the mode |
| Sell pressure | Beta(2,5) or truncated normal | Same — a proportion between 0 and 1; Beta(2,5) skews toward low sell pressure |
| Time to sell | Exponential | Most sell quickly, few wait long |
| Token price | Log-normal | Multiplicative dynamics, never negative |
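These distributions map directly onto NumPy's random generators. A minimal sampling sketch (all parameter values are illustrative, not recommendations):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Log-normal users: median 20,000, never negative, heavy right tail
users = rng.lognormal(mean=np.log(20_000), sigma=0.6, size=n)

# Beta(5, 5) staking share: bounded in (0, 1), centered at 50%
stake_pct = rng.beta(5, 5, size=n)

# Beta(2, 5) sell pressure: bounded in (0, 1), skewed toward low values
sell_pressure = rng.beta(2, 5, size=n)

# Exponential time to sell: most sell quickly, a few wait long (mean 3 months)
time_to_sell = rng.exponential(scale=3, size=n)

print(f"users median:       {np.median(users):,.0f}")
print(f"stake_pct mean:     {stake_pct.mean():.2f}")
print(f"sell_pressure mean: {sell_pressure.mean():.2f}")
print(f"time_to_sell mean:  {time_to_sell.mean():.2f}")
```

Note that the log-normal is parameterized by the log of the desired median, which is easier to reason about than the underlying μ.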

Example: Treasury Sustainability

A protocol raised $5M at TGE. The team spends money but earns fees from users. Question: will the treasury last 36 months?

The model has four parameters. Each is sampled from a distribution because the exact value is unknown upfront:

| Parameter | Distribution | Range | Rationale |
|---|---|---|---|
| Monthly burn rate | Log-normal (median $150K, σ=0.3) | $90K–$250K | Salaries, infrastructure, marketing; log-normal because expenses can't be negative but can spike |
| User growth | Normal (μ=8%, σ=4%) | 0%–16%/mo | Organic growth with high uncertainty |
| Revenue per user | Uniform ($2–$8/mo) | $2–$8 | Protocol fees; range based on benchmarks (DeFi: $3–$5, GameFi: $1–$2) |
| Initial users | Log-normal (median 2,000, σ=0.5) | 800–5,000 | Depends on TGE marketing success |

Dependencies within the model:

  • Revenue = users × revenue_per_user (more users → more revenue)
  • Burn rate grows 2%/mo (salary inflation, team growth)
  • Treasury balance = previous balance + revenue − expenses
  • If balance hits 0 — the protocol can’t fund operations

Results from 2,000 runs:

| Metric | Value |
|---|---|
| Median (P50) balance at month 24 | ~$1.4M |
| 5th percentile (P5) at month 24 | ~$0.0M |
| 95th percentile (P95) at month 24 | ~$4.2M |
| Share of runs with zero balance by month 36 | ~48% |

Key takeaway: in ~48% of runs, the treasury is depleted before month 36. This means with current parameters, the protocol has only ~52% chance of surviving three years without additional fundraising. If the acceptable risk threshold is 5%, you need to either cut the burn rate or increase initial treasury to ~$10M.

Python: Monte Carlo simulation code
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(42)
n_simulations = 2000
n_months = 36
initial_treasury = 5_000_000  # $5M

results = np.zeros((n_simulations, n_months))

for sim in range(n_simulations):
    treasury = initial_treasury

    # Sample parameters once per run (inter-run variability).
    # In more advanced models, parameters can vary month-to-month.
    # Note: parameters are sampled independently here — see "pitfalls" below.
    monthly_burn = np.random.lognormal(mean=np.log(150_000), sigma=0.3)
    user_growth = np.random.normal(0.08, 0.04)  # 8% ± 4% growth/mo
    revenue_per_user = np.random.uniform(2, 8)   # $/user/mo
    initial_users = np.random.lognormal(mean=np.log(2000), sigma=0.5)

    users = initial_users

    for month in range(n_months):
        users *= (1 + max(user_growth + np.random.normal(0, 0.02), -0.1))
        revenue = users * revenue_per_user
        burn = monthly_burn * (1 + 0.02 * month)  # expenses grow 2%/mo

        treasury = treasury + revenue - burn
        if treasury <= 0:
            # Depletion is absorbing: a dead protocol earns nothing, so the
            # run ends here and later months stay at 0 in `results`
            break
        results[sim, month] = treasury

# === Visualization ===
fig, axes = plt.subplots(1, 2, figsize=(14, 5))

months = np.arange(1, n_months + 1)
p5 = np.percentile(results, 5, axis=0)
p25 = np.percentile(results, 25, axis=0)
p50 = np.percentile(results, 50, axis=0)
p75 = np.percentile(results, 75, axis=0)
p95 = np.percentile(results, 95, axis=0)

axes[0].fill_between(months, p5/1e6, p95/1e6, alpha=0.1, color='#2563eb')
axes[0].fill_between(months, p25/1e6, p75/1e6, alpha=0.2, color='#2563eb')
axes[0].plot(months, p50/1e6, color='#2563eb', linewidth=2, label='Median')
axes[0].plot(months, p5/1e6, color='#dc2626', linewidth=1,
             linestyle='--', label='5th percentile')
axes[0].axhline(y=0, color='black', linewidth=0.5)
axes[0].set_xlabel('Month')
axes[0].set_ylabel('Treasury ($M)')
axes[0].set_title('Monte Carlo: treasury balance')
axes[0].legend()
axes[0].grid(True, alpha=0.3)

bankrupt_by_month = np.zeros(n_months)
for month in range(n_months):
    bankrupt_by_month[month] = np.mean(results[:, month] == 0) * 100

axes[1].bar(months, bankrupt_by_month, color='#dc2626', alpha=0.7)
axes[1].set_xlabel('Month')
axes[1].set_ylabel('% of runs with empty treasury')
axes[1].set_title('Cumulative bankruptcy probability')
axes[1].grid(True, alpha=0.3)

plt.tight_layout()
plt.show()

How to Read Results

| Metric | What it shows | Example |
|---|---|---|
| Median (P50) | Most likely outcome | Treasury = $1.4M at month 24 |
| 5th percentile (P5) | Worst realistic scenario | Treasury = $0.0M at month 24 |
| 95th percentile (P95) | Best realistic scenario | Treasury = $4.2M at month 24 |
| Probability of event | Chance of a specific outcome | 48% of runs: treasury = 0 by month 36 |
| VaR (Value at Risk) | Maximum loss at a given probability | VaR 95% at month 12: treasury loss ≤ $3.6M (P5 balance = $1.4M) |
The 5th percentile rule
Design tokenomics so the system remains functional at the 5th percentile (worst 5% of runs). If at P5 the treasury hits zero at month 18 but runway should be 24 — the model needs rethinking.
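The rule can be checked mechanically on any Monte Carlo output matrix. A sketch with synthetic data — the `results` array here is a stand-in for the matrix produced by the simulation loop, and the drift/noise parameters are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a Monte Carlo output matrix: 2,000 runs x 36 months.
# In practice, use the `results` array produced by the simulation itself.
n_sims, n_months = 2_000, 36
drift = np.linspace(5_000_000, 500_000, n_months)        # declining median path
noise = rng.normal(0, 600_000, size=(n_sims, n_months))
results = np.maximum(drift + 0.05 * noise.cumsum(axis=1), 0)

required_runway = 24  # months the treasury must survive at P5

p5 = np.percentile(results, 5, axis=0)  # worst realistic path
depleted = p5 <= 0
p5_runway = int(np.argmax(depleted)) if depleted.any() else n_months

print(f"P5 runway: {p5_runway} months (required: {required_runway})")
print("OK" if p5_runway >= required_runway else "rethink the model")
```

The check reads the P5 path, not the median: a design that looks healthy at P50 can still fail this test.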

Common Mistakes

Monte Carlo pitfalls

  • Correlated parameters sampled independently. If user count grows, treasury load grows too. Use copulas or joint distributions
  • Distributions too wide. "Users from 100 to 10,000,000" is not informative. Narrow ranges based on comparables
  • Too few iterations. 100 runs won't produce stable percentiles. Minimum 1,000, ideally 5,000
  • Ignoring tails. The average result isn't interesting — extreme scenarios matter
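The first pitfall — sampling correlated parameters independently — can be addressed with a Gaussian copula: sample correlated standard normals, map them to uniforms through the normal CDF, then push the uniforms through the inverse CDFs of the target marginals. A sketch using NumPy and SciPy; the correlation `rho` and the marginal parameters are illustrative:

```python
import numpy as np
from scipy.stats import norm, lognorm, beta

rng = np.random.default_rng(42)
n = 5_000
rho = 0.6  # assumed correlation between users and staking share

# 1. Correlated standard normals (the Gaussian copula core)
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)

# 2. Map each column to uniforms via the standard normal CDF
u = norm.cdf(z)

# 3. Push the uniforms through the inverse CDFs of the target marginals
users = lognorm.ppf(u[:, 0], s=0.6, scale=20_000)  # log-normal, median 20,000
stake_pct = beta.ppf(u[:, 1], a=5, b=5)            # Beta(5, 5) staking share

# Marginals keep their shapes, but the samples are now jointly correlated
r = np.corrcoef(users, stake_pct)[0, 1]
print(f"sample correlation: {r:.2f}")
```

Each marginal keeps its distribution from the table above; only the joint behavior changes.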
Level 4: Agent-Based Modeling

Monte Carlo varies parameters, but the model within each run remains deterministic: formulas compute from start to finish. Agent-based modeling (ABM) adds another layer — the behavior of individual participants.

In ABM, each user is a separate agent with their own balance, strategy, and decision rules. At each step, agents react to the current system state, and their actions change that state for everyone else.

This enables modeling emergent effects: cascading staking exits, governance attacks, bank runs on liquidity pools — everything that can’t be expressed as a single closed-form formula.

When ABM Is Needed

  • Staking with uneven distribution (whales)
  • AMM and liquidity pools
  • Governance with voting
  • Any system with feedback loops between participants

Tools for ABM in tokenomics: radCAD (an actively maintained Python framework by CADLabs), Mesa (a general-purpose ABM framework), and cadCAD (the original crypto-economic simulation framework, largely unmaintained since 2022).
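To make the idea concrete, here is a deliberately minimal staking ABM in plain Python, with no framework. Each agent has a personal minimum-APR threshold and unstakes when APR drops below it; every exit raises APR for the remaining stakers, so the population settles into an equilibrium that no single formula specifies. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

daily_rewards = 800  # fixed reward budget shared by all stakers
n_agents = 1_000

# Heterogeneous agents: log-normal stakes (whales + minnows) and
# uniformly distributed personal minimum-APR thresholds
stakes = rng.lognormal(mean=np.log(5_000), sigma=1.0, size=n_agents)
thresholds = rng.uniform(0.03, 0.10, size=n_agents)
active = np.ones(n_agents, dtype=bool)

for step in range(100):
    apr = daily_rewards * 365 / stakes[active].sum()
    leavers = active & (thresholds > apr)
    if not leavers.any():
        break  # no agent wants to exit: emergent equilibrium reached
    active[leavers] = False  # exits shrink the pool, raising APR for the rest

print(f"equilibrium APR: {apr:.1%}, stakers remaining: {active.sum()}/{n_agents}")
```

This sketch models exits only; a fuller model would add re-entry when APR rises, price feedback on sell pressure, and whale-sized strategic behavior — which is where frameworks like radCAD or Mesa earn their keep.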

Method Comparison

| | Sensitivity analysis | Scenario | Monte Carlo | ABM |
|---|---|---|---|---|
| Complexity | Low | Low | Medium | High |
| Number of outcomes | N points | 3–5 | 1,000+ | 1,000+ |
| Parameter interdependencies | No | Manual | Via distributions | Via behavior |
| Emergent effects | No | No | No | Yes |
| Tools | Google Sheets | Google Sheets | Python / R | Python (radCAD, Mesa) |
| When to use | Early stage | Investor presentation | Model stress test | Complex mechanisms |

Which Method at Which Stage

Workflow

  • Sensitivity analysis — identify which parameters are critical. This takes an hour in a spreadsheet
  • Scenario analysis — show 3 scenarios to investors and team. A shared language for decision-making
  • Monte Carlo — test robustness: "in what percentage of runs does everything break?" Requires Python
  • Agent-based modeling — if the system has participants with competing strategies and feedback between actions
Combining Methods

In practice, the methods don’t exclude each other. A typical project workflow:

1. Spreadsheet — allocation, vesting, basic unit economics
2. Sensitivity analysis — find the parameters the model is most sensitive to
3. Monte Carlo — 2,000 runs on key parameters, get percentiles
4. ABM — for critical mechanisms (staking, AMM, governance), build an agent model and run 100+ iterations

Each step informs the next: sensitivity analysis shows what to vary in Monte Carlo. Monte Carlo shows where ABM is needed.

Acceptance criterion: P(system_functional) ≥ 0.95

  • Goal: the system remains functional in at least 95% of Monte Carlo runs
  • Verify separately: the system must also survive at the 5th percentile of input parameters
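The first half of the criterion reduces to one line of NumPy over the final column of the simulation output. A sketch with synthetic final balances standing in for the real `results` matrix (the mean and spread below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: final treasury balance of each Monte Carlo run
# (a run ending at balance 0 counts as non-functional). In practice,
# take the last column of the simulation's `results` matrix.
final_balances = np.maximum(rng.normal(1_400_000, 700_000, size=2_000), 0)

target = 0.95
p_functional = np.mean(final_balances > 0)

verdict = "meets" if p_functional >= target else "misses"
print(f"P(system functional) = {p_functional:.1%} ({verdict} the {target:.0%} target)")
```

The second half — survival at the 5th percentile of inputs — is a separate run with all input parameters pinned to their P5 values, not a statistic over random runs.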

Practical Guidelines

When a Spreadsheet Is Enough

  • Allocation and vesting — fixed schedules, nothing to simulate
  • Unit economics at the early stage — no data yet for distributions
  • Presenting the concept to investors — simplicity is needed, not percentiles

When You Need Monte Carlo

  • Treasury design — the key question is “will the money last”
  • Emission parameter tuning — at what values does inflation spiral out of control
  • Runway estimation — project lifespan under different growth scenarios

When You Need ABM

  • Staking with uneven distribution (whales)
  • AMM and liquidity pools
  • Governance with voting
  • Any system with feedback loops between participants

Need a simulation for your project?

We stress-test tokenomics: from Monte Carlo to agent-based modeling. We find weaknesses before launch.

Get in touch