Where money flows in the Veritas economy.
Six distinct investment paths, the unit economics of each, and how the oracle-services industry has actually built revenue in comparable shapes. Realistic numbers, with the failure cases.
This page describes the investment shapes that would exist if the Veritas Protocol becomes operational. There is no live token, no offering, no fund. The protocol is in v0.2 working-paper stage. Several of the structures described here cannot legally exist until specific legal entities are formed and regulatory positions are confirmed (see the critical review for the v0.3 work that has to come first).
If you are a potential funder, investor, foundation, or partner: there is a contact form on the brief. Use it. We will not accept capital under any specific structure until the structures exist; we will engage seriously with anyone thinking about which path matches their thesis.
Why this might be a real economy, not a speculation game
Most "fact-checking" or "trust" projects either operate on donations alone or run a speculation token disconnected from real services. Both shapes have known failure modes. The donation-only model produces fragile organisations of limited scale. The speculation-token model produces the Civil / Po.et / Bitpress lineage of failures.
Veritas's economic design copies a different lineage: oracle-services networks like Chainlink, Pyth, and UMA. These networks book service revenue when AI systems, smart contracts, and applications pay for verified data. The validators (called node operators in oracle parlance, validators in Veritas's) earn a 60–70% share of fees. The token, where present, is the medium of exchange — closer to a stable-asset substitute with utility demand than to a pure speculation instrument.
Whether the demand actually materialises at the scale Veritas projects is the open question. The critical review flagged it as the single biggest risk, and the oracle-economy research on this site quantifies what realistic demand looks like at each scale stage by reference to comparable networks.
If the demand materialises, six economic layers exist within the protocol. Each is a distinct investment thesis with distinct unit economics, distinct risks, and distinct timing.
The six investment paths — at a glance
| Path | What you fund | Realistic in | Risk profile | Upside shape |
|---|---|---|---|---|
| 1 · Verification centers | Operating institution / staff / infra | Phase II (months 6–18) | Low–medium | Service-business margin, modest stable revenue |
| 2 · Utility token | Network medium of exchange | Phase II launch | High (regulatory + adoption) | Network-value growth if successful; failure modes are real |
| 3 · Verification-services tooling | Tooling firms serving validator centers | Phase III (18+ months) | Medium–high | Vertical-SaaS multiples |
| 4 · CPML application layer | Consumer + B2B apps that consume CPML | Phase III | Medium–high | Broadest market; most dependent on protocol success |
| 5 · End-user anti-hallucination products | Prosumer plugins + extensions + AI-assistant overlays | Phase II–III | Medium | Direct B2C / prosumer revenue independent of AI-lab adoption |
| 6 · Defensive patent portfolio | Patent prosecution + licensing entity holding necessary IP | Phase I (urgent — public-disclosure clock running) | Low–medium (defensive) | Asymmetric: prevents third-party blocking; modest licensing income |
Detailed analysis of each follows. For a less technical version, see the plain-English version.
Path 1 — Invest in verification centers
Operate or fund institutions that perform attestation work and earn the protocol's 60–70% revenue share.
Comparable to investing in specialised research firms, journalism cooperatives, or library-affiliated trust-services businesses. Service-margin business with growing demand if the protocol succeeds.
What you fund
Operating costs of an institutional validator. Editorial staff (typically 2–3 senior reviewers + 1–2 junior staff at Phase II scale). Infrastructure (signing-key custody, libp2p gossip nodes, evidence-storage retention, attestation submission). Legal coverage (defamation insurance, jurisdictional compliance, retraction-handling procedures). Audit, quality assurance, and reputation-management overhead.
What you earn
Revenue per validator scales with verification volume, weighted by reputation rank in chartered consensus domains. At Phase II base-case (~12 institutional validators sharing ~$1.2M/year of validator-side compensation), individual validator revenue lands at approximately $80–120K/year. At Phase III base-case (50–80 validators sharing ~$11M), ~$135–220K/year. The optimistic case is materially larger but contingent on AI-laboratory revenue at the scale documented in the oracle-economy research.
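As a sanity check, the figures above follow from pool size divided by validator count. A minimal sketch (real payouts are reputation-weighted, so this even split is a simplification):

```python
def per_validator_revenue(pool_usd: float, n_validators: int) -> float:
    """Mean annual revenue per validator: compensation pool divided by
    validator count. Actual payouts are reputation-weighted, so real
    figures spread around this mean."""
    return pool_usd / n_validators

# Phase II base case: ~$1.2M pool shared by ~12 institutional validators
phase2_mean = per_validator_revenue(1_200_000, 12)    # $100K/year

# Phase III base case: ~$11M pool shared by 50-80 validators
phase3_high = per_validator_revenue(11_000_000, 50)   # $220K/year
phase3_low = per_validator_revenue(11_000_000, 80)    # $137.5K/year
```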
Investment shape
- Endowment — donate to an existing institution to fund its verification arm. Tax-deductible in many jurisdictions. Returns are mission impact, not financial.
- Equity in a for-profit verification firm — fund a new specialised verification firm (deep-dive investigative cooperative, specialised topic-domain firm, regional fact-check). Returns are equity in a service business with growing addressable market.
- Hybrid — B-corp or non-profit operating with revenue feeding mission programs. Several IFCN signatories operate roughly this way today.
Failure modes
- Volume doesn't reach projected scale (AI-laboratory adoption risk).
- Defamation lawsuits in hostile jurisdictions exceed insurance.
- Reputation damage from a single bad attestation collapses an individual center's economic position.
- Validator labour market saturates faster than revenue grows; per-validator economics degrade.
Path 2 — Utility token
Hold the token used by AI laboratories, websites, and content publishers to pay for protocol services.
Comparable to early Chainlink LINK, Pyth, or UMA token positions. Service-payment utility, not speculation. Token value scales with service volume and the protocol's revenue capture.
What the token is
A utility token following the Chainlink service-payment pattern. AI laboratories pay for grounding queries with it. Websites pay for verification certificates with it. Validators earn it for attestation work and convert to stable assets via the treasury. The current v0.2 design includes a treasury-backed buyback mechanism (validators burn tokens for stable assets at a treasury-set rate). The next paper version (v0.3) may drop the burn mechanism on regulatory advice from the aegis + quant analysis — in which case the treasury pays validators in stable assets directly and the token is purely consumable.
The investment thesis
Token value scales with protocol service volume. Higher revenue → higher token demand → (if burn enabled) lower circulating supply → asset value growth. If burn is not enabled, token utility-demand alone supports value (you must hold tokens to consume protocol services).
Equilibrium token-value-per-revenue ratios for comparable oracle networks are documented in the oracle-economy research. Briefly: Chainlink-comparable revenue stages map to ~$30M–$300M+ market cap depending on revenue capture rate and float assumptions.
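One common way to bridge service revenue to token value is an equation-of-exchange style estimate. The sketch below uses that model with illustrative capture-rate and velocity parameters; both are assumptions for this page, not figures from the oracle-economy research:

```python
def implied_market_cap(annual_revenue_usd: float, capture_rate: float,
                       velocity: float) -> float:
    """Equation-of-exchange style estimate: the token base must be large
    enough to settle (capture_rate * revenue) of annual payment flow at
    the given turnover velocity. Both parameters are assumptions."""
    return annual_revenue_usd * capture_rate / velocity

# $30M annualised service revenue at a Chainlink-comparable stage:
low = implied_market_cap(30e6, 0.5, 0.5)    # $30M: half captured, fast turnover
high = implied_market_cap(30e6, 1.0, 0.1)   # $300M: full capture, slow turnover
```

The spread between `low` and `high` is why the comparable-network range spans an order of magnitude: capture rate and float assumptions dominate the estimate.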
Failure modes — these are real
- Regulatory reclassification — biggest single risk. SEC, MiCA, FINMA, MAS rulings could force structural changes after launch.
- Burn mechanism is the weakest legal link — quant analysis recommends dropping it before Phase II launch. If the foundation goes ahead with burn anyway and faces enforcement, the value-bridge collapses.
- Token-economic capture — whales accumulate enough supply to pressure treasury parameters. Mitigations are designed in but unproven.
- AI-lab thesis fails — token utility demand stays small, secondary-market liquidity stays thin.
- Failed-predecessor pattern — Civil, Po.et, Bitpress, Factmata. We've designed against the specific failure modes; designing against history doesn't guarantee not repeating it.
Path 3 — Verification-services tooling
Equity in firms that build AI-augmented verification tooling for institutional validators.
One layer above verification centers. The centers do the work; tooling firms build the AI-assisted software that lets centers do verification 5–10× faster. Comparable to legal-tech serving law practices, or revenue-cycle-management vendors serving hospitals.
The opportunity
Verification labour is the bottleneck. Per-claim expert time at current rates runs from 30 minutes to several hours for non-trivial claims. The internet's claim-production rate is many orders of magnitude higher than any plausible validator workforce can review. Even with the protocol's design (verification is incentive-routed, not exhaustive), there is a labour gap that closes only with tooling.
AI-assisted verification — done with humans in the loop and outputs reviewed — multiplies throughput. Specialised tooling for: claim atomisation (extracting individual factual claims from prose); source tracing (following citation chains, flagging broken links and retracted papers); evidence cross-checking (querying trusted databases — PubMed, court records, government filings); first-draft attestation generation; anomaly detection (sock-puppet validators, fabricated provenance graphs, bad-faith commissioning patterns).
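The tooling categories above share a pipeline shape: atomise prose into claims, then run each claim through tracing and cross-checking stages. A toy sketch of the first two stages, with deliberately naive stand-ins for what would really be LLM-assisted, human-reviewed steps:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    sources: list[str] = field(default_factory=list)
    flags: list[str] = field(default_factory=list)

def atomise(prose: str) -> list[Claim]:
    """Toy claim atomiser: one claim per sentence. A real tool would use
    an LLM pass with human review to find claim boundaries."""
    return [Claim(s.strip() + ".") for s in prose.split(".") if s.strip()]

def trace_sources(claim: Claim, citations: dict[str, str]) -> Claim:
    """Toy source tracer: flag claims with no known citation. A real
    tool would follow citation chains and check retraction databases."""
    if claim.text in citations:
        claim.sources.append(citations[claim.text])
    else:
        claim.flags.append("no-citation")
    return claim

claims = [trace_sources(c, {}) for c in
          atomise("Water boils at 100C at sea level. The Moon is made of cheese.")]
```

The point is the staged, inspectable pipeline — each stage leaving an audit trail a human reviewer can check — not the naive sentence splitting.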
Investment shape
Equity in seed/Series-A startups serving the verification-center market. Roughly comparable to legal-tech venture investment. The customer is the verification center; the protocol creates the addressable market.
Timing
This layer cannot exist until 20–50 verification centers are operating with clear pain points. Realistically a Phase III opportunity (18+ months from now), not Phase II. Earlier-stage investment can fund proof-of-concept work but commercial traction requires the protocol's institutional layer to exist first.
Failure modes
- Verification centers build tooling in-house rather than buy.
- Open-source tools dominate; commercial returns thin.
- The protocol doesn't reach enough verification centers to support a vendor ecosystem.
- The AI models needed for high-quality verification themselves hallucinate (the original problem repeating one layer up).
Path 4 — CPML application layer
Equity in consumer apps, websites, browsers, AI assistants, and search products that consume CPML to render dynamic, profile-aware experiences.
Compare to early App Store, early WordPress, early OAuth/Stripe. A new infrastructure primitive enables an ecosystem of products that depend on it. Risk profile resembles seed-stage consumer SaaS.
What this layer would look like
Once CPML reaches scale (~100K user profiles, distributed across communities), every reading and reasoning experience can become profile-aware:
- News-reader apps that filter / annotate stories by your trusted community's verifications.
- Browser extensions that overlay verification badges across all webpages.
- AI assistants that ground responses in your chosen consensus profile (your scientific community, your historical frame, etc.).
- Search engines that re-rank by which sources your CPML trusts most.
- Education platforms adapting content to the learner's stated epistemic frame.
- Research tools showing how a claim is regarded across multiple specialty consensuses.
- Civic / policy products surfacing where consensus exists vs where contestation is real.
- Specialised vertical applications: medical, legal, financial — each with domain-specific verification rules.
Investment shape
Equity in consumer-software or B2B-SaaS startups. Risk and timing resemble seed-stage products built on a new primitive. Returns are correlated with protocol success; specific applications win or lose on product-market fit.
Failure modes
- Application layer doesn't materialise because the underlying protocol fails to reach critical mass.
- Big platforms (Google, Apple, Microsoft) build similar functionality natively, capturing the value.
- Products built on CPML get out-competed by products with proprietary trust signals that have larger network effects.
- Consumer adoption of profile-aware reading turns out to be smaller than the CPML thesis assumes.
Path 5 — End-user anti-hallucination products
Equity in firms building consumer-facing or prosumer anti-hallucination products that ground AI-assistant output against Veritas — independent of whether the AI lab itself integrates.
This is the B2C / prosumer layer. Browser extensions that overlay verification status on every AI assistant a user interacts with. Desktop or mobile companion apps. Specialised prosumer tools for journalists, researchers, students, lawyers, doctors. Direct revenue from users — not from AI labs.
Why this is a distinct path from Path 1 (AI-lab integration)
Path 1 (verification centers earning from AI-lab subscriptions) depends on AI laboratories themselves agreeing to integrate. The critical review flagged this as the single biggest project risk: there is no precedent for frontier AI labs paying for third-party grounding at the projected scale.
Path 5 routes around that dependency. End-users want their AI assistants to hallucinate less even if OpenAI, Anthropic, and Google never integrate Veritas natively. A browser extension that:
- Watches the user's interactions with ChatGPT, Claude, Gemini, Copilot, Perplexity, etc.
- Extracts the factual claims from the AI's output.
- Queries Veritas in the background under the user's CPML.
- Overlays each claim with a verification badge: verified, contested, unsupported, retraction-pending, source: did:web:university.example.
- Optionally rewrites or annotates the AI's reply with grounding citations the user can click through.
This product can ship as soon as the Veritas read API is operational (Phase II). It does not require AI labs to do anything different. The user pays a subscription; the user's AI experience improves; the validator network earns service revenue from the user-side queries.
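The extension's core loop, sketched against a hypothetical `verify` client (the Veritas read API does not exist yet at v0.2, so the client is stubbed and claim extraction is a naive sentence split):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Verdict:
    claim: str
    status: str                   # "verified" | "contested" | "unsupported" | ...
    source: Optional[str] = None  # e.g. "did:web:university.example"

def check_reply(reply: str, verify: Callable[[str], Verdict]) -> list[Verdict]:
    """Core loop of the hypothetical extension: extract claims from an
    AI reply, run each through a verifier scoped to the user's CPML,
    and return badge data for the UI overlay."""
    claims = [s.strip() + "." for s in reply.split(".") if s.strip()]
    return [verify(c) for c in claims]

def stub_verify(claim: str) -> Verdict:
    """Stub standing in for the future Veritas read API client."""
    return Verdict(claim, "unsupported")

badges = check_reply("Paris is the capital of France. Atlantis is real.", stub_verify)
```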
Who builds these products and why
The companies most likely to build them, ranked by motivation:
- Existing extension / add-on businesses — Grammarly, NotebookLM-style productivity tools, Readwise, Pocket, AllSides — vendors with installed prosumer base who can attach Veritas as a feature.
- Specialised vertical-AI product teams — legal-AI (Harvey, Casetext, etc.), medical-AI (OpenEvidence, Glass Health), journalism-AI (Otter, Grouply, Braver Angels). Each has a verification-sensitive user base.
- Independent productivity startups — new entrants who position themselves explicitly as "AI grounding done right."
- Browser vendors and AI-overlay platforms — Brave (which already has a privacy-first user base), Arc, Mozilla (in partnership), specialised AI-browser projects like Sigma OS or Dia.
- Open-weight-model serving platforms — Hugging Face (Spaces and Endpoints), Together AI, Replicate, Fireworks. Platforms whose business model already routes around frontier-lab proprietary stacks; integrating Veritas is a competitive feature.
Unit economics
Pricing models that fit the prosumer market:
- Freemium — free tier with verification on the user's most-clicked claims; paid tier ($5–15/month) for unlimited verification + custom CPML + provenance-DAG inspector.
- Vertical SaaS — for journalism, law, medicine, education: $25–150/month per professional user, with verification as a primary feature alongside specialised CPML domain support.
- Enterprise add-on — extend existing AI-assistant or knowledge-management products with Veritas-grounded checking; B2B contracts at $10K–500K/year per customer.
From a Veritas economic perspective, Path 5 routes service revenue to validators through end-user-paid product subscriptions. A prosumer plugin charging $10/month per user across 100,000 users yields $12M/year top-line; if 30% of that flows through the Veritas API as service fees, the protocol receives roughly $3.6M/year from that one product alone. Multiple Path 5 products in parallel can plausibly produce $5–20M/year of service-fee inflows independent of AI-lab adoption.
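The arithmetic in that example, generalised (the 30% pass-through rate is this page's assumption about product cost structure, not a protocol parameter):

```python
def annual_service_fees(users: int, monthly_price_usd: float,
                        passthrough: float) -> tuple[float, float]:
    """Top-line subscription revenue and the share reaching the protocol
    as API service fees. The pass-through rate is an assumption."""
    top_line = users * monthly_price_usd * 12
    return top_line, top_line * passthrough

top, fees = annual_service_fees(100_000, 10.0, 0.30)
# top = $12M/year; fees ≈ $3.6M/year to the validator network
```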
This is a meaningful insulation against the Path 1 risk: even if no major AI lab integrates, the user-side product layer can route real money to validators.
Investment shape
Equity in seed-to-Series-B prosumer SaaS startups. Risk profile resembles consumer / prosumer software (browser extensions, productivity tools, vertical AI) — the well-known venture pattern for products at this stage. Returns are correlated with both protocol adoption and the specific product's distribution.
What could go wrong
- Big AI labs build native verification. If Anthropic, OpenAI, or Google ship in-house grounding that competes with prosumer plugins, Path 5 products are squeezed. Mitigation: Path 5 products differentiate on user-owned CPML (your verification, your way), which big labs structurally won't replicate because they can't own the user's preferences.
- Plugin adoption is slow. Browser-extension installs are a famously hard distribution channel. Mitigation: vertical SaaS (legal, medical, journalism) has stronger distribution and higher willingness-to-pay.
- Veritas's API isn't reliable enough at Phase II. Early API instability hurts dependent products. Mitigation: Phase II commits to a service-level objective; prosumer-product partners get early-access tier.
- Privacy concerns. A plugin that watches your AI conversations is a high-trust ask. Mitigation: client-side query construction, no centralised logging, transparent CPML handling, optional fully-local deployment for the highest-trust customers.
Path 6 — Defensive patent portfolio
Fund the prosecution and ongoing maintenance of a defensive patent portfolio covering the protocol's novel mechanisms — to prevent third parties from filing on the same primitives and blocking the protocol's open development.
The patent path is asymmetric: low capital outlay relative to other paths, but materially protects the entire economy if the protocol succeeds. Comparable to the role Open Invention Network plays for Linux, the Apache Software Foundation IP grants for open-source web stacks, and the W3C patent policy for web standards.
Why this is urgent and underappreciated
The working paper has already disclosed several novel mechanisms publicly. Public disclosure before filing starts a clock: the United States allows a 12-month grace period (35 U.S.C. § 102(b)(1)); Europe allows none, because the European Patent Convention (Article 54) requires absolute novelty. Mechanisms disclosed in the April 2026 v0.2 paper are therefore US-patentable only until April 2027 and have already lost novelty in EPO-bound jurisdictions; material not yet publicly disclosed remains patentable everywhere.
If a third party — a hostile competitor, a patent troll, or a well-resourced AI laboratory — files patents on the same mechanisms first, the Veritas Protocol's entire economy becomes legally fragile. They could license restrictively or sue protocol participants. Defensive prosecution prevents this: the foundation files first, then commits the patents to a defensive-only licensing scheme that protects the open-source ecosystem.
The mechanisms that are likely patentable
A preliminary novelty review of the working paper identifies the following candidate filings. Each requires a freedom-to-operate search before drafting; some may be unpatentable due to prior art, but the portfolio across all of them is the thesis, not any single one.
- Cascading falsification with K-scaling quorum. Method for propagating retraction events through a claim-dependency graph with the validator-quorum requirement scaling with value-at-stake (sentinel §2.1 mitigation). Likely novel; closely related to Doyle 1979 truth-maintenance but with the value-scaled-quorum addition.
- Source-authenticated retraction. Cryptographic method for verifying that a retraction event originates from the source of record (closes the ~$100K source-forgery attack from sentinel §7.1). The mechanism — multi-sig from the original source authority + transparency-log inclusion — appears genuinely novel.
- CPML composition with VAF-audience semantics. Method for client-side composition of per-consumer verdicts from a plural attestation pool, parameterised by a user-owned profile expressing ordinal value-orderings. Specific implementation details around the resolver pipeline (RFC-CPML § 9) are patentable in software-patentable jurisdictions.
- Investigation-market matching with reputation-weighted auto-assignment. Method for routing investigation commissions to qualified validators with anti-collusion enforcement, jurisdictional-diversity requirements, and reputation-floor thresholds. Distinct from existing prediction-market and dispute-resolution patents.
- Signed starter-CPML registry with trojan-defence. Method for federation-signed publication of starter consensus profiles such that clients refuse to load unsigned profiles by default. Anti-phishing for the epistemic layer (sentinel §6.1).
- Reputation math for permissionless validators with cohort-aware weighting. EigenTrust-variant algorithm with state-actor-credential awareness, jurisdictional-diversity bonus, and divergence-detection scoring.
- Investigation-market pricing dynamics. Algorithm for setting per-tier fees based on claim controversy, queue depth, and validator availability with diminishing-returns suppression of well-resourced muddying.
- Cold-start validator-pool reputation bootstrapping. Protocol for the 12-month period before reputation math has sufficient signal — anchoring on a pre-credentialed institutional cohort with explicit sunset.
- Anti-hallucination plugin architecture. Method for client-side claim extraction from AI-assistant output, background CPML-scoped verification queries, and inline UI overlay of verification status (Path 5 architecture).
- Multi-frame verdict-rendering UX patterns. Specific UI methods for surfacing "show-disagreement" composed verdicts, opposite-view daily prompts, and provenance-DAG inspection.
Estimated portfolio size: 10–20 filings across US, EU (unitary patent), Japan, and China. Some claims will fail on prior art; the surviving subset is the operating portfolio.
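The value-scaled quorum in the first filing can be illustrated with a function where the co-signing requirement grows with value-at-stake. The thresholds and logarithmic shape below are invented for illustration; the working paper's actual parameters are not reproduced here:

```python
import math

def retraction_quorum(value_at_stake_usd: float,
                      base_quorum: int = 3, k: float = 2.0) -> int:
    """Hypothetical value-scaled quorum: the number of validators that
    must co-sign a cascading retraction grows with the dollar value of
    downstream claims depending on the retracted one. Thresholds and
    the logarithmic shape are invented for illustration."""
    if value_at_stake_usd <= 1_000:
        return base_quorum
    return base_quorum + math.ceil(k * math.log10(value_at_stake_usd / 1_000))

# retraction_quorum(500) -> 3; retraction_quorum(1_000_000) -> 9
```

The design intent is that a retraction cascading through $1M of dependent claims needs far more co-signers than a low-stakes correction, making high-value cascade attacks proportionally expensive.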
Cost economics
| Stage | Cost per patent (US) | Cost per patent (EU unitary) | Notes |
|---|---|---|---|
| Provisional + drafting | $8K–15K | $10K–18K | Initial filing; locks priority date. |
| Non-provisional + prosecution | $15K–35K | $15K–30K | Office-action responses, claim amendments. Multi-year process. |
| Issuance + first-year maintenance | $3K–8K | $5K–10K | One-time issuance fee + annual. |
| 20-year maintenance (ongoing) | $20K–60K aggregate | $30K–80K aggregate | Maintenance fees rise over time; can lapse if no longer strategic. |
| Total per patent (life of portfolio) | ~$50K–120K | ~$60K–140K | Higher in software-heavy jurisdictions and contested fields. |
Portfolio cost estimate: $1.5M–3M total over the life of the portfolio for 10 filings across US + EU + JP. Most spend is front-loaded in the first 5 years (drafting + prosecution); maintenance is the long-tail cost.
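The portfolio estimate can be rebuilt from the per-patent table. The JP figures below are assumptions (the table covers only US and EU), and the high end assumes every filing is prosecuted in all three jurisdictions, which the $1.5M–3M estimate does not:

```python
# Life-of-portfolio cost per patent, (low, high) in USD. US and EU rows
# come from the table above; the JP row is an assumption.
PER_PATENT = {"US": (50_000, 120_000),
              "EU": (60_000, 140_000),
              "JP": (55_000, 130_000)}

def portfolio_cost(n_filings: int, jurisdictions: list[str]) -> tuple[int, int]:
    """Cost range assuming every filing is prosecuted in every listed
    jurisdiction (the page's $1.5M-3M estimate assumes some filings
    skip jurisdictions, so its high end sits below this one)."""
    low = sum(PER_PATENT[j][0] for j in jurisdictions)
    high = sum(PER_PATENT[j][1] for j in jurisdictions)
    return n_filings * low, n_filings * high

lo, hi = portfolio_cost(10, ["US", "EU", "JP"])   # ($1.65M, $3.9M)
```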
This is small relative to the other paths' capital requirements, but the timing is uniquely urgent. Filings ideally start in Q3–Q4 2026: the US 12-month grace period on the v0.2 disclosures expires in April 2027 (a hard deadline for material published in the April 2026 paper), and in Europe, which has no grace period, patentability survives only for mechanisms not yet publicly disclosed.
Investment shape
Three structural options:
- Foundation-held defensive portfolio. The Veritas Foundation files the patents and commits them to a defensive-only license under one of the established frameworks: Open Invention Network (OIN — Linux model), Patent Pledge (Twitter's Innovator's Patent Agreement), Defensive Patent License (DPL), or W3C Royalty-Free Patent Policy. Investor return: foundation-equity-equivalent (board seat, governance influence) plus the option of a "patent participation certificate" tied to enforcement-driven income.
- Separate IP-holding entity. A purpose-formed legal entity (typically Delaware C-corp or Swiss AG) holds the patents and licenses them on a per-use-case basis. Veritas Foundation receives a perpetual royalty-free defensive license; commercial actors using the IP outside the protocol's open ecosystem pay licensing fees. Investor return: equity in the IP-holding entity. This shape is the "Sun Microsystems patent fund" model — controversial; can drift away from defensive-only intent if the IP entity becomes financially independent.
- IP-pool consortium. Multiple parties (foundation, partner institutions, AI labs that integrate, validator coalitions) co-file and co-hold the patents under a consortium structure. Modelled on MPEG-LA / Avanci patent pools. Highest legitimacy; most operational complexity.
Recommended path: Option 1 (foundation-held defensive portfolio). Lowest operational complexity. Strongest alignment with the protocol's open-source and public-interest commitments. Investor returns are modest but the strategic value to the protocol is asymmetric.
Returns analysis
Defensive patent portfolios are not high-IRR investments by themselves. The return is the prevented downside: a hostile blocking patent on the same primitives could force the protocol to redesign or license restrictively, costing $10M–$100M+ in lost optionality. A $2M defensive portfolio that prevents that exposure is a 5×–50× implicit return on capital, but the return is realised as continued protocol existence rather than as cash flow.
Cash returns from defensive licensing programs are typically modest — Open Invention Network's annual licensing income is in the low tens of millions across thousands of patents. A 10-patent Veritas portfolio under a similar model might book $50K–$500K/year in licensing fees from non-aligned commercial users. Below cost-of-capital. The return is structural protection, not yield.
What could go wrong
- Defensive-only commitment slips. An IP-holding entity with patents and falling foundation funding could face pressure to enforce offensively. Mitigation: irrevocable patent commitments to OIN-style frameworks; charter prohibition on enforcement against defensive-licensee good-faith users.
- Patents are issued but unenforceable. Software-patent eligibility has narrowed substantially under Alice Corp. v. CLS Bank International (2014) and subsequent US case law. Some claims may issue but fail at enforcement. Mitigation: drafting strategy targets the more eligible categories (cryptographic methods, specific data-structure-plus-process combinations) over abstract "method of doing X" claims.
- Patent trolls file first. Continuation strategies, abusive prior-art interpretation. Mitigation: rapid filing on the highest-priority mechanisms; defensive publication of the rest (forces prior-art consideration).
- EPO refuses on technical-character grounds. Many software-flavoured Veritas mechanisms struggle with EPO's "technical character" requirement. Mitigation: file in US first (broader software-patent scope), accept EPO holes for some claims, file under unitary patent only for genuinely-technical mechanisms.
- Foundation governance capture. An IP-holding entity is a high-value target. Mitigation: defensive-license irrevocability; multi-stakeholder board with sector caps; OIN-style external pool participation.
Why this matters now (timing argument)
The working paper has been public since April 2026. In Europe, where there is no grace period, the disclosed material has already lost novelty; in the United States, the 12-month grace clock expires in April 2027. If Path 6 is going to happen, drafting must start within the next 3–6 months. Beyond that window, the most-novel mechanisms (cascading-falsification quorum, source-authenticated retraction) lose US patentability as well.
This urgency is unique to Path 6. The other five paths can wait for Phase II or III. Path 6's window is closing faster than the protocol's own timeline.
Unit economics
Detailed unit-economic models for both crypto-native and non-crypto setups are in research-oracle-economy.md. Briefly:
Per-attestation economics
- Cost to validator: $5–80 per attestation, depending on depth (15-minute spot-check vs multi-hour deep investigation).
- Cost to chain (gas): $0.001–0.01 on Base/Optimism (current 2026 rates).
- Revenue per attestation (from AI-lab subscription pool, certificate fees, etc.): $0.50–5 per query; investigation-market commissions priced separately at $300–10K+ per case.
Per-AI-lab subscription economics
- Pricing tiers: Research/pilot ($50–500K/year), Production basic ($500K–2M/year), Production high-volume ($2–20M/year).
- Comparable benchmarks: Vectara, Pinecone, OpenAI grounding APIs at similar revenue stages.
Per-investigation-commission economics
- Tiers: Quick ($300, 2 validators, 1–3 days), Standard ($1K, 3 validators, 5–10 days), Deep ($3K, 5 validators, 2–4 weeks), Extended ($10K+, multi-week original-source acquisition).
- Validator share: 60–70% of escrow, distributed by reputation-weighted voting on completed work.
- Foundation share: 5–10% of escrow for operations.
- Public-interest fund contribution: 10% routed to under-resourced-claim commissioning.
Treasury-flow model — simplified
Inflows: AI-lab subs + certificate subs + investigation commissions + content-publisher priority fees + donations. Outflows: validator compensation (60–70%) + operations (20–30%) + reserves (10–15%). Reconciles with the three-tier revenue scenarios in the working paper §8.
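The outflow side of that model, using mid-points of the stated ranges (0.65 / 0.25 / 0.10 are illustrative mid-points; the actual shares are treasury-set parameters):

```python
def treasury_split(inflows_usd: float, validator_share: float = 0.65,
                   ops_share: float = 0.25,
                   reserve_share: float = 0.10) -> dict[str, float]:
    """Allocate treasury inflows using mid-points of the 60-70% /
    20-30% / 10-15% ranges. Shares must sum to 1; the actual values
    are treasury-set parameters, not fixed constants."""
    assert abs(validator_share + ops_share + reserve_share - 1.0) < 1e-9
    return {"validators": inflows_usd * validator_share,
            "operations": inflows_usd * ops_share,
            "reserves":   inflows_usd * reserve_share}

flows = treasury_split(2_000_000)   # e.g. $2M of annual inflows
# validators ≈ $1.3M, operations = $500K, reserves ≈ $200K
```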
Oracle-economy comparables
The Veritas economy most closely resembles existing decentralised oracle networks. Detailed analysis of comparables in research-oracle-economy.md. Headline observations:
Chainlink (LINK)
The closest direct analogue. Service-payment token, no burn, validators (node operators) paid per service. It has built service-fee revenue from on-chain integrations (CCIP, Functions, VRF, Data Streams), reaching roughly $30M in annualised fees by years 4–5 and more by year 7. Token market cap is currently in the multi-billion range. Operator economics: top node operators run substantial businesses; smaller operators run lean specialised practices.
Pyth Network (PYTH)
Pull-oracle model — data publishers (often market-makers and exchanges) push, applications pull on demand. Stake-Per-Pull revenue capture. Data-publisher economics differ from node-operator economics; publishers are paid for unique data not for general verification. Less directly applicable to Veritas.
UMA (UMA)
Optimistic oracle with dispute-resolution via DVM. Disputes are rare but expensive. The March 2025 Polymarket Zelenskyy-suit incident is canonical: $7M outcome flip from a token-economic attack. UMA has consistently struggled to grow revenue beyond a thin slice; the structural challenge is that dispute revenue concentrates around contested events, which are episodic.
Witnet (WIT)
Dual-token: non-transferable reputation + transferable utility. Has remained small. Instructive on how dual-token systems handle validator alignment.
Kleros (PNK) and Reality.eth
Court-style dispute resolution. Juror PNK economics are real but small-scale. Reality.eth's escalation-game truth oracle provides a strong template for Veritas's dispute panel + cascade-quorum design.
What Veritas inherits — and where it differs
Veritas's model is closest to Chainlink's service-payment pattern. The key differences: Veritas's validators are institutional (universities, libraries) rather than crypto-native node operators; Veritas's customer base is AI laboratories rather than DeFi protocols; Veritas's atypical mechanism is the investigation market, which has no precise oracle-network analogue but maps loosely to Reality.eth's escalation game.
Realistic Year-3 unit-economic story for Veritas: comparable to Chainlink's Year 3-4 stage — ~$10–50M annualised service revenue if AI-lab integration materialises. The $695M–$2.78B optimistic projection from the paper is achievable only at multi-year compound growth from a base case in this range.
Full comparison and rationale: research-oracle-economy.md.
Risks — concentrated
- AI-laboratory adoption is the load-bearing dependency. If no major lab integrates with measurable hallucination-reduction results, the validator-compensation model has to shrink dramatically. Critical review documents this as the single biggest project risk.
- Token regulatory exposure. The current burn-to-stable-asset mechanism is the weakest Howey-test link. Quant analysis recommends dropping it before Phase II launch.
- Adversarial-cost floors are too low for value-at-stake. Sentinel's threat model identified five critical attack scenarios at $100K–$5M cost — small relative to the expected service volume the protocol will mediate.
- Foundation governance capture. Five editorial surfaces are foundation-controlled. v0.3 reduces this; full reduction is v0.4+.
- Pluralism-coherence is unfinished. The philosophical position has not yet been discharged against standard objections. Sophisticated foundations will catch this on first reading.
- Investigation-market asymmetry. Well-funded actors can flood with muddying commissions; under-resourced parties get drowned. Public-interest fund partially mitigates but doesn't eliminate.
- Failed-predecessor lineage. Civil, Po.et, Factmata, Bitpress all failed in adjacent shapes. Designed-against-history is not a guarantee.
How to engage now
The protocol is in v0.2 working-paper stage. None of the six paths is currently accepting investment under any specific structure.
Productive engagement now:
- Foundations and grant-makers with programmes in open infrastructure or journalism-tech can fund the v0.3 paper round (~US$ 80–150K, 12–13 weeks) via the foundation when it is incorporated.
- Universities, libraries, fact-check organisations can sign a letter of intent to operate a Phase II validator. We need 5–10 institutions across 3+ jurisdictions.
- AI laboratories can engage on a research-pilot basis. The Phase II AI-lab integration is the determining empirical test of the entire economic model. Anthropic, Mistral, Hugging Face, open-weight model communities are all plausible partners.
- Crypto-native infrastructure investors can engage on the token-and-architecture path — initially through the v0.3 design conversations rather than a token round.
- Application-layer entrepreneurs can begin work on CPML-aware products that will operate against the test network in Phase II.
The contact form on the brief is the way in. Role dropdown includes "Foundation / funder / grantmaker," "AI laboratory," "University / library / research institute," "Fact-checking organisation," and "Developer / engineer." Pick the one that fits and tell us your thinking. Two-week response target.
The investment paths describe future structures, not present offerings. The protocol is in v0.2 working-paper stage. v0.3 must address the critical-review findings. v0.4 ships a working implementation. Any actual fundraising structures live downstream of those steps.
This page exists so people thinking seriously about the space can understand what the eventual structures would look like — and where the real risks live. It does not invite capital under any current structure.
Brief · Working paper · Critical review · Plain-English version · Oracle-economy research