Get Tokenomics

Tokenomics audits that name the actual problem

Independent multi-track review of an existing tokenomics design. Claims, market context, math, on-chain reality, mechanism integrity, inputs — every finding tied to a specific piece of evidence and ranked Critical, High, Medium, Low, or Note.

40+
tokenomics projects
6
parallel review tracks
240+
patterns library

What we audit

Six review tracks running in parallel. Every finding tied to specific evidence — a quote from a whitepaper, a cell in a model, a transaction on-chain, a market reference with a date. No vague concerns, no design opinions dressed up as objective findings.

Claims and narrative

Every quantitative claim from whitepapers, decks, and public communications pulled, verified, and tagged. We separate marketing language from defensible commitments. Mismatches between stated mechanics and what the model or contracts actually implement surface here first.
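
To make "pulled, verified, and tagged" concrete, a minimal sketch of the claim record we have in mind (field names and statuses are illustrative, not a fixed schema):

```python
from dataclasses import dataclass
from enum import Enum

class ClaimStatus(Enum):
    VERIFIED = "matches the model, the contracts, or on-chain data"
    MARKETING = "directional language, not a defensible commitment"
    MISMATCH = "stated mechanic differs from what is implemented"

@dataclass
class Claim:
    source: str          # e.g. "whitepaper v2.1, p. 14" (hypothetical reference)
    quote: str           # the exact quantitative statement, verbatim
    status: ClaimStatus
    evidence: str        # model cell, contract address, or dated market source
```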

Market context

TAM, SAM, SOM, competitor benchmarks, regulatory framing — verified against current data on DefiLlama, CoinGecko, Token Terminal, and on-chain registries. Stale references and fabricated comparisons get flagged with a source and a snapshot date.
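
A sketch of how a market reference gets a source and a snapshot date attached, here using CoinGecko's public simple-price endpoint (the coin id and the exact response fields are assumptions based on the public API; treat this as illustrative):

```python
import requests
from datetime import date

def snapshot_market_reference(coin_id: str) -> dict:
    """Capture a market benchmark together with the source URL and the
    snapshot date that every market-context finding must carry."""
    url = "https://api.coingecko.com/api/v3/simple/price"
    resp = requests.get(url, params={"ids": coin_id, "vs_currencies": "usd",
                                     "include_market_cap": "true"}, timeout=10)
    data = resp.json()[coin_id]
    return {
        "source": f"{url}?ids={coin_id}",      # where the number came from
        "snapshot": date.today().isoformat(),  # when it was captured
        "price_usd": data["usd"],
        "market_cap_usd": data.get("usd_market_cap"),
    }

# e.g. snapshot_market_reference("bitcoin")
```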

Mathematics and formulas

Every calculation in the model recomputed independently. Formula correctness in xlsx, dimensional consistency, edge-case behaviour. Unit checks across the entire model — currency, time, supply, percentage. Off-by-one errors in vesting schedules surface every time.
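
The vesting off-by-one is worth showing. A minimal recomputation sketch, assuming a linear schedule with a cliff catch-up; the allocation figure and all parameters are hypothetical:

```python
from fractions import Fraction  # exact arithmetic, no float rounding noise

def vested(month: int, total: Fraction, cliff: int, duration: int) -> Fraction:
    """Tokens unlocked by the end of `month` (1-based). The off-by-one
    hides in two places: whether the cliff month itself unlocks, and
    whether `duration` is the last vesting month or the first fully
    vested one."""
    if month < cliff:
        return Fraction(0)
    return min(total, total * month / duration)

total = Fraction(15_000_000)  # hypothetical team allocation
schedule = [vested(m, total, cliff=6, duration=24) for m in range(1, 25)]
assert schedule[0] == 0, "tokens unlock before the cliff"
assert schedule[-1] == total, "schedule never reaches 100%"
increments = [b - a for a, b in zip([Fraction(0)] + schedule, schedule)]
assert sum(increments) == total, "monthly unlocks do not sum to the allocation"
```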

On-chain reality

Claimed TVL, holder counts, transaction volumes, treasury holdings — verified against blockchain data. BlockchainQuery for L1/L2 state, DefiLlama for protocol-level metrics, explorer-level checks for specific addresses. Marketing numbers vs the actual on-chain state.
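
For TVL specifically, the cross-check can be this direct. A sketch against DefiLlama's public endpoint; the protocol slug, the claimed figure, and the 10% tolerance are all illustrative:

```python
import requests

def check_tvl_claim(slug: str, claimed_usd: float, tolerance: float = 0.10) -> dict:
    """Compare a claimed TVL figure against DefiLlama's current number.
    `slug` is the protocol's DefiLlama identifier."""
    # api.llama.fi/tvl/<slug> returns the current TVL as a bare number
    actual = float(requests.get(f"https://api.llama.fi/tvl/{slug}", timeout=10).text)
    drift = abs(actual - claimed_usd) / claimed_usd
    return {
        "claimed_usd": claimed_usd,
        "actual_usd": actual,
        "drift": round(drift, 4),
        "finding_candidate": drift > tolerance,  # flag if the claim has drifted
    }

# e.g. check_tvl_claim("example-protocol", claimed_usd=120_000_000)
```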

Mechanism integrity

How supply, demand, incentives, and treasury mechanisms interact. Feedback loops, sink-source balance, perverse incentives. Where the system breaks under stress — depeg scenarios, mass-exit events, oracle failures, governance attacks.
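
A deliberately toy sketch of the sink-source question under a mass-exit shock; every parameter here is hypothetical, and a real engagement models the actual mechanisms:

```python
def net_new_supply(months: int = 36, emissions: float = 1_000_000.0,
                   burn_per_activity: float = 900_000.0,
                   shock_month: int = 12, shock: float = 0.2) -> float:
    """Fixed monthly emissions (source) vs a burn that scales with user
    activity (sink). At `shock_month`, activity collapses to `shock` of
    baseline, a crude mass-exit scenario. Returns cumulative net new supply."""
    activity, net = 1.0, 0.0
    for month in range(months):
        if month == shock_month:
            activity *= shock
        net += emissions - burn_per_activity * activity
    return net

# Balanced on paper (~100k net inflation per month), but after the shock the
# sink collapses and net inflation jumps to ~820k per month.
```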

Inputs and sources

Every input parameter in the model: where did it come from, when was it captured, is it defensible? Magic numbers, stale data, and unsourced assumptions surface immediately. The model is only as solid as its weakest input.
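
The provenance check reduces to three questions per parameter. A minimal sketch (field names and the 180-day staleness threshold are illustrative):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelInput:
    """Provenance we expect on every parameter in the model."""
    name: str
    value: float
    source: str | None     # URL, dataset, or document reference
    captured: date | None  # snapshot date of the value

    def findings(self, stale_after_days: int = 180) -> list[str]:
        issues = []
        if self.source is None:
            issues.append("unsourced assumption (magic number)")
        if self.captured is None:
            issues.append("no snapshot date")
        elif (date.today() - self.captured).days > stale_after_days:
            issues.append("stale input")
        return issues
```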

How we work

From a discovery call to a delivered report in five steps. Every step ends with a written artefact you can challenge.

01

Discovery

What is being audited (full system, a specific mechanic, model only, public materials only), who needs the result, what decision hangs on it. End state: a written audit brief both sides agree on.

Scope · Stakeholders · Brief

02

Material intake

Whitepaper, deck, model, on-chain identifiers, public communications — everything goes into a structured corpus, indexed by claim and by mechanic. Nothing reviewed informally.

Whitepaper · Model · On-chain · Corpus

Illustrative model extract:

Token      Alloc %   Vesting
Team       15%       24 mo
Investors  20%       12 mo
Community  40%       36 mo
Treasury   25%       48 mo

03

Parallel analysis

Four review tracks run simultaneously — claims, market context, math, on-chain. Mechanism integrity and inputs are reviewed across all four. Every track produces a finding log with evidence per item and a severity proposal.

Claims · Market · Math · On-chain

04

Severity and synthesis

Findings ranked Critical, High, Medium, Low, Note. Cross-track issues elevated. The five to ten highest-impact items become the executive summary. Severity definitions are written into the report so nothing is hand-waved.

Critical/High/Med/Low/Note · Cross-track · Exec summary

05

Report and walk-through

Written report (PDF plus Markdown source), severity-ranked findings with evidence per item, recommended actions prioritised by impact and effort, plus a presentation call. The report is built so a CFO, a CPO, or an investor can defend it independently.

Report · Recommendations · Walk-through
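
The finding log every track produces in step 03 has a simple shape; a minimal sketch with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One entry in a track's finding log. Every finding carries its
    evidence; severity enters as a proposal and is settled in step 04."""
    track: str               # "claims" | "market" | "math" | "on-chain"
    summary: str             # what is wrong, in one sentence
    evidence: str            # quote, model cell, tx hash, or dated source
    proposed_severity: str   # "Critical" | "High" | "Medium" | "Low" | "Note"
    recommendation: str = "" # prioritised action, filled during synthesis
```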

Have a model or deck to audit?

Send the docs. We will come back with a price, a timeline, and a recommended audit depth — full multi-track or focused single-track.

Sectors we audit

Real mechanics from our 240+ patterns library, weighted toward what is most often mis-specified, mis-modelled, or mis-implemented.

DeFi

Bonding-curve edge cases · Liquid-staking yield drift · Lending and collateral ratios · Subordination tranche math · Inflation vs staking curve

GameFi

Action-reward economics · Streak multiplier sustainability · Genesis NFT vs in-game token boundary · Lootbox expected value · NFT vs non-NFT segments

RWA

Ownership NFTs — legal vs technical · Commodity token redemption math · Security rev-share waterfalls · Tokenized stock dividend logic · Off-chain reserves attestation

DePIN

Mining reward halving cliffs · IoT-device mint formula · Proof-of-compute collateral · ASIC fleet economics · Multi-asset mining drift

L1/L2

PoS staking curve · Inflation vs staking ratio · EIP-1559 dynamic pricing · Validator reward concentration · Value-transfer fee splits

Governance

ve-tokenomics decay · Quorum manipulability · Quadratic-voting Sybil resistance · Voting committee centralisation · Fee-switch governance attack vectors

FAQ

When do you need a tokenomics audit?
Five common moments. Pre-TGE — before launch, to catch fatal flaws while changes are still cheap. Pre-fundraise — to defend the design to investors with an independent voice. Post-incident — after a depeg, a governance attack, an oracle failure, or significant unexpected price action. Pre-relaunch — when redoing tokenomics after a failed first attempt. Periodic — for live protocols, every 12 to 18 months, as the on-chain state and the market drift away from the original design.
How long does an audit take?
A focused single-track audit — claims-only or math-only on a small model — runs 1 to 2 weeks. A full multi-track audit on a complete tokenomics package (whitepaper, model, on-chain) runs 3 to 6 weeks depending on system complexity. We give a tighter estimate after the discovery call.
What deliverables ship?
A written report (PDF plus Markdown source so you can quote and circulate it), a severity-ranked finding log with evidence per item, recommended actions prioritised by impact and effort, and a walk-through call. The report is built for independent defence — your CFO, CPO, board, or investors can read it without us in the room and form their own view.
How is a tokenomics audit different from a smart-contract security audit?
Different products, different risks. A security audit (OpenZeppelin, Hacken, Trail of Bits and similar) reviews smart-contract code — reentrancy, overflows, access control, exploit vectors. We review economic design — claims, math, market context, on-chain reality, mechanism integrity, inputs. A system can pass a clean code audit and still have fatal economic design flaws. Both audits are needed for a serious launch; they answer different questions.
How is an audit different from modelling?
Audits review what already exists. Modelling builds it from scratch — or rebuilds the parts that do not add up. Both can be commissioned together: audit first to identify what is broken, modelling next to fix it. We run them as separate engagements with separate deliverables.
Can you audit tokenomics without an existing model?
Yes, with a caveat. Claims, market context, mechanism integrity, on-chain reality — all auditable from public materials and on-chain data alone. The math track requires either a model (xlsx, Python, anything) or detailed enough specs to reconstruct one. Without numbers to recompute, math findings are qualitative.
Do you sign NDAs?
Yes, before any project material is shared. Standard mutual NDAs — we have a template, or we sign yours. Audit material is treated as more sensitive than design material because findings can move markets if leaked.

Need an independent set of eyes on your tokenomics?

Send what exists — model, deck, whitepaper, on-chain identifier, anything. We will read it cold and come back with a focused audit proposal.