# 05 — Revenue Model: Six Streams + Treasury

> *Diversified revenue across consumers, enterprises, content producers, and disputes. No single stream is load-bearing; the treasury smooths the difference.*

## The six streams

1. **AI-laboratory grounding subscriptions.** AI labs pay for high-volume access to the signed, domain-scoped claim corpus. Flat-rate tier for research access; usage-metered tier for production inference.

2. **Website certificate subscriptions.** Sites that publish `/factcheck.json` pay a subscription fee to maintain their Veritas certificate. The model is analogous to HTTPS certificates in the paid-tier era, when Let's Encrypt did not yet exist and certificates cost real money. Tiered by site size.

3. **User / app subscriptions.** End users or apps that want premium access — deeper lookups, priority queue, multi-CPML composition, browser extension with advanced features — pay a subscription.

4. **Content-producer priority verification fees.** Publishers that want their claims verified quickly (e.g., breaking news) pay a priority-lane fee. Normal-queue verification remains free or grant-funded; priority is user-funded.

5. **Investigation commissions.** When a claim is contested, parties pay to commission a formal investigation. The commissioned validators execute, post evidence, return a verdict. See `06-investigation-market.md` for the full mechanism.

6. **NGO / foundation / philanthropic donations.** Ongoing grants from foundations (Mozilla, Knight, MacArthur, Ford, Protocol Labs / FFDW, Wellcome), major gifts, NGO funding for specific verification centres.

## Architecture

Buyers pay in USDT, fiat, or the utility token (`VRT`). All routes deposit value into the treasury:

```
  BUYERS (labs, sites, users, producers, parties, NGOs)
                     │
                     ▼
               TREASURY (USDT + fiat + VRT holdings)
                     │
     ┌───────────────┼────────────────┐
     ▼               ▼                ▼
 VALIDATORS      OPERATIONS        RESERVE
 (compensation)  (foundation,      (stability fund,
                 chain, infra,     regulatory
                 legal, audit)     contingency)
```

Treasury policy is published, auditable, and parameterised by board-approved rules (not ad-hoc). Core parameters:

- **Validator compensation share** — target 60–70% of inbound revenue routed to validators on a per-work basis.
- **Operations share** — 20–30% to foundation ops (governance, spec maintenance, reference implementations, dispute panel, standards-body engagement).
- **Reserve share** — 10–15% held as stability fund; used to smooth revenue volatility and fund one-time regulatory or legal contingencies.
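The published policy can be made machine-checkable. A minimal sketch, assuming the board-approved shares are stored as fractions that must sum to 1.0 (the class and method names here are hypothetical, not part of any published spec):

```python
# Illustrative treasury-split policy. Shares are the midpoints of the
# published target bands (65% validators, 25% operations, 10% reserve).
from dataclasses import dataclass


@dataclass(frozen=True)
class TreasuryPolicy:
    validator_share: float = 0.65   # target band: 60-70%
    operations_share: float = 0.25  # target band: 20-30%
    reserve_share: float = 0.10     # target band: 10-15%

    def __post_init__(self):
        # Reject any parameterisation that does not route all inbound value.
        total = self.validator_share + self.operations_share + self.reserve_share
        if abs(total - 1.0) > 1e-9:
            raise ValueError(f"shares must sum to 1.0, got {total}")

    def split(self, inbound_usd: float) -> dict:
        """Route one unit of inbound revenue per the published ratios."""
        return {
            "validators": round(inbound_usd * self.validator_share, 2),
            "operations": round(inbound_usd * self.operations_share, 2),
            "reserve": round(inbound_usd * self.reserve_share, 2),
        }


policy = TreasuryPolicy()
print(policy.split(1_800_000))
# {'validators': 1170000.0, 'operations': 450000.0, 'reserve': 180000.0}
```

Encoding the rule as a frozen dataclass makes "board-approved, not ad-hoc" concrete: changing a ratio means publishing a new policy object, not mutating a running one.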

## Scenario analyses (order of magnitude)

### Phase II launch (month 18 steady state)

| Stream | Volume assumption | Annual revenue |
|---|---|---|
| AI-lab grounding | 1 lab, research-tier contract | ~$500K |
| Certificates | 200 sites × $200/year | ~$40K |
| User subscriptions | 2,000 at $60/year | ~$120K |
| Priority verification | 500 fees × $400 | ~$200K |
| Investigation commissions | 300 × $1,500 avg | ~$450K |
| Donations | 2 foundations | ~$500K |
| **Total** | | **~$1.8M/year** |

Validator budget at 65%: ~$1.2M → 12 validators at ~$100K each, which is feasible institutional engagement.

### Phase III scale-up (month 36)

| Stream | Volume | Annual revenue |
|---|---|---|
| AI labs | 4 labs, production-tier | ~$5M |
| Certificates | 5,000 sites × $200 | ~$1M |
| User subscriptions | 50,000 × $60 | ~$3M |
| Priority verification | 5,000 × $400 | ~$2M |
| Investigation commissions | 3,000 × $1,500 | ~$4.5M |
| Donations | Steady | ~$1.5M |
| **Total** | | **~$17M/year** |

Validator budget: ~$11M → 50–80 institutional validators at meaningful stipends. The remainder supports full operational build-out.

These scenarios are not forecasts; they are *illustrative capacity analyses*. Actual numbers depend heavily on AI-lab uptake (the biggest lever), investigation-market price discovery, and the certificate-adoption curve.
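The two tables above can be reproduced as a back-of-envelope script; the volumes and prices are the tables' assumptions, not forecasts, and the 65% validator share is the policy midpoint:

```python
# Capacity check for the two scenarios. Each value is (volume x price)
# straight from the tables above.
phase2 = {
    "ai_lab_grounding": 500_000,            # 1 lab, research-tier contract
    "certificates": 200 * 200,              # 200 sites x $200/yr
    "user_subscriptions": 2_000 * 60,
    "priority_verification": 500 * 400,
    "investigation_commissions": 300 * 1_500,
    "donations": 500_000,                   # 2 foundations
}
phase3 = {
    "ai_lab_grounding": 5_000_000,          # 4 labs, production-tier
    "certificates": 5_000 * 200,
    "user_subscriptions": 50_000 * 60,
    "priority_verification": 5_000 * 400,
    "investigation_commissions": 3_000 * 1_500,
    "donations": 1_500_000,
}


def capacity(streams: dict, validator_share: float = 0.65) -> tuple:
    """Total annual revenue and the slice routed to validators."""
    total = sum(streams.values())
    return total, total * validator_share


for name, streams in [("Phase II", phase2), ("Phase III", phase3)]:
    total, validator_budget = capacity(streams)
    print(f"{name}: total ~${total / 1e6:.2f}M, "
          f"validator budget ~${validator_budget / 1e6:.2f}M")
# Phase II: total ~$1.81M, validator budget ~$1.18M
# Phase III: total ~$17.00M, validator budget ~$11.05M
```

The exact totals ($1.81M and $17.0M) round to the ~$1.8M and ~$17M figures quoted in the tables.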

## Why diversification matters

A single revenue source (whether AI labs, donations, or certificates) creates capture risk and volatility. Mozilla learned this when dependency on the Google search deal came to dominate its revenue. Signal Foundation's donation-only model works but is fragile; Wikimedia's is genuinely sustainable but took two decades to reach scale.

Veritas's diversified mix is designed so no single stream exceeds ~40% of revenue at steady state. This protects:

- **Editorial independence from AI labs** — labs are customers, not owners.
- **Editorial independence from states** — NGO donations are not tied to policy demands (unlike direct state funding).
- **Editorial independence from subscribers** — individual subscribers have no aggregated leverage.
- **Editorial independence from content producers** — priority fees do not buy verdicts, only queue position.

Treasury-held reserves allow the foundation to decline any individual customer relationship that conflicts with its mission, without endangering validator compensation.

## Critical analysis

**1 — AI-lab revenue may not materialise in the timeline assumed.** This is the biggest uncertainty. Mitigations: pilot contract in Phase II under research grant rather than market-rate; publish measurable benchmark data regardless; have donation-only operational floor if labs do not sign.

**2 — Certificate fees may cannibalise the "free self-declared" ClaimReview baseline.** If sites can publish ClaimReview for free via Google's existing markup tools, why pay Veritas? Answer: Veritas certificates add *third-party validator attestations* on top of self-declaration, plus cross-site cascade propagation — real additional value. Pricing is tuned accordingly.

**3 — Investigation-market revenue depends on adversarial commissioning at scale.** If parties do not contest each other's claims at the predicted rate, this revenue stream is smaller than modelled. Mitigation: start with lower volume assumption; re-tune after 12 months of real data.

**4 — Donations are lumpy, not smooth.** A major foundation cycle is 2–4 years. Veritas must never be in a position where a single donor exit kills the operation. Mitigation: secure multi-year grants, budget so that a single donor exit leaves at least six months of runway, and keep the reserve share sufficient to cover one year of operations.

**5 — User subscriptions require consumer distribution.** The consensus-quiz MVP (`07-quiz-mvp.md`) is the primary consumer acquisition channel. If MVP traction is poor, subscription revenue collapses.

**6 — Priority-verification fees could be seen as pay-for-favourable-outcome.** Response: priority pays for *queue position*, not for verdict direction. A published queue, fee schedule, and audit trail prevent the protocol from being seen as rigged. If the design is not bulletproof on this point, priority verification should be dropped rather than allowed to discredit the protocol.
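The separation is enforceable structurally: the fee can influence only the ordering key, and the verdict pipeline can be handed the claim alone. A minimal sketch (all names hypothetical; the real implementation would live in `veritas-priority-queue`):

```python
# Sketch of "priority buys queue position, never verdict": the lane and
# fee appear only in the ordering key and the audit log, never in what
# the verification pipeline receives.
import heapq
import itertools


class VerificationQueue:
    PRIORITY_LANE = 0   # paid priority fee
    NORMAL_LANE = 1     # free / grant-funded

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-break within a lane
        self.audit_log = []                # published, append-only

    def submit(self, claim_id: str, lane: int, fee_usd: float = 0.0):
        # Lane and fee are logged for the public audit trail.
        heapq.heappush(self._heap, (lane, next(self._counter), claim_id))
        self.audit_log.append({"event": "submit", "claim": claim_id,
                               "lane": lane, "fee_usd": fee_usd})

    def next_claim(self) -> str:
        lane, _, claim_id = heapq.heappop(self._heap)
        self.audit_log.append({"event": "dispatch", "claim": claim_id,
                               "lane": lane})
        return claim_id  # the verdict pipeline sees only the claim


q = VerificationQueue()
q.submit("claim-a", VerificationQueue.NORMAL_LANE)
q.submit("claim-b", VerificationQueue.PRIORITY_LANE, fee_usd=400.0)
q.submit("claim-c", VerificationQueue.NORMAL_LANE)
print([q.next_claim() for _ in range(3)])  # ['claim-b', 'claim-a', 'claim-c']
```

Because `next_claim` returns only the claim identifier, an auditor can verify from the published log that fees correlate with dispatch order and with nothing downstream of it.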

**7 — Token-based revenue adds complexity.** Response: all streams can be paid directly in USDT / fiat. Token is a convenience layer for crypto-native participants, not a requirement. If the token legal posture worsens, token is de-emphasised without killing the revenue model.

## Funding-source targets (specific)

### Phase I (0–6 months, ~$200–300K)

- **Drow / Nous self-funded + small seed** — prototype aggregator, reference implementations.
- **Foundation exploratory grant** — Mozilla MIECO ($30–50K), Protocol Labs early-stage ($50K), or individual major gifts.

### Phase II (6–18 months, ~$400–600K)

- **Major foundation grant** (Knight, MacArthur, Mozilla Technology Fund, Ford). One primary grant $300–500K.
- **AI-lab pilot contract** — research-tier agreement with 1 lab, $100–300K.
- **Institutional in-kind contributions** — university libraries + research groups covering their own validator operating costs.

### Phase III (18 months+, $1M+ scaling to $5M+)

- **Multi-lab AI-grounding contracts.**
- **Certificate-subscription consumer launch.**
- **Investigation-market volume ramps.**
- **Continued foundation support + major gifts.**
- **Country-chapter national funding streams** (chapter-specific NGO partnerships).

## Open questions

- What is the pricing elasticity of AI-lab grounding? We have no data; Phase II pilot is the first data point.
- Is the investigation market robust to being used adversarially by well-funded parties against under-resourced ones? (Partially addressed in `06-investigation-market.md`.)
- Do AI labs prefer per-call micropayment or flat-tier subscription? Both should be offered; steady-state mix is unknown.
- At what scale does the foundation need a dedicated treasury / investment committee distinct from the board?

## What we'd build

- **`veritas-billing`** — billing engine: subscription management, per-call metering, invoicing, receipt of fiat / USDT / VRT.
- **`veritas-treasury-dashboard`** — public, read-only view of treasury holdings, revenue by stream, disbursements, ratios vs policy.
- **`veritas-grant-portal`** — NGO / foundation-facing donation flow with structured reporting.
- **`veritas-certificate-api`** — issuance, renewal, revocation for site certificates.
- **`veritas-priority-queue`** — tiered verification queue with audit trail.
- **`veritas-investigation-market-contract`** — see `06-investigation-market.md`.
- **Published treasury policy document** — ratios, rebalancing rules, exception procedure.
- **Annual treasury audit** — independent financial audit, published.
