
What four critics said about Veritas — in plain English.

We asked four independent reviewers to find problems with the proposal. They did. This is what they said, translated for normal humans. Nothing was softened.

~ 8-minute read · Honest · Print-friendly

1. Why we did this

Most projects publish a polished pitch and bury the criticism. We wanted the opposite. After publishing the v0.2 working paper, we commissioned four independent reviewers — each with a different specialty, each told to find problems, not reasons the design is good.

They wrote 320 KB of detailed critique across four reports. We synthesised them into a master review and published all five documents on the same website as the proposal. Nothing was edited. Nothing was removed.

"The critique is a first-class artefact of the protocol, not a phase the protocol passes through on the way to polished release."

If a reviewer concluded the core idea has a hole, that conclusion stayed in.

2. The four reviewers and what they found

Reviewer 1 — super-reviewer · the whitepaper auditor

Found 7 critical problems and 11 high-severity ones in the working paper.

Biggest finding: The economic model is built on a single pillar — payments from AI laboratories. The optimistic projection ($695M–$2.78B/year) and the realistic-scenario projection ($17M/year) differ by a factor of roughly 40 to 160. The paper hand-waves at the difference instead of reconciling it.
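The size of that gap follows directly from the quoted figures. A quick arithmetic check (the figures are from the paragraph above; Python is our choice of notation, not the paper's):

```python
# Revenue projections quoted in the review, in USD per year.
optimistic_low, optimistic_high = 695e6, 2.78e9  # $695M–$2.78B/year
realistic = 17e6                                 # $17M/year

# How far apart the two scenarios are.
ratio_low = optimistic_low / realistic    # ~41x
ratio_high = optimistic_high / realistic  # ~164x
print(f"optimistic vs realistic: {ratio_low:.0f}x to {ratio_high:.0f}x")
```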

Also flagged: a citation about a US court case had the wrong year (caught and fixed). A footnote about a 1996 web standard misstated what it did (caught and fixed). Several supposed primary references were missing or unverifiable.

7 critical · 11 high · 9 medium · 6 low
Reviewer 2 — athena · the philosopher

Found 3 critical and 4 high-severity logical / philosophical problems.

Biggest finding: The pluralism — the idea that "different communities check things differently and that's OK" — slides into philosophical relativism if you push on it. The paper says it's not relativism but doesn't engage with the standard philosophical objection (Boghossian's Fear of Knowledge). Sophisticated readers will catch this and decide the project is intellectually shallow.

Also: the protocol claims "no single authority" but the foundation actually controls five editorial surfaces. The CPML technical sketch cites a fancy formal framework (Bench-Capon's Value-based Argumentation Frameworks) but isn't actually an instance of that framework. That's "provenance laundering" — using an authority's name without doing the work.

3 critical · 4 high
Reviewer 3 — sentinel · the red-team / attacker

Found 5 critical attack scenarios.

Biggest finding: A motivated state actor with about $2 million per year could run 100 validator nodes in the system, building reputation honestly, then weaponising it. The protocol's defences (reputation math + jurisdictional diversity) work in steady state but are too weak in the early period when validators are scarce.
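As a rough sketch of why the early period is the weak point: the per-node cost follows from the review's own figures, while the honest-validator counts below are hypothetical, chosen only to show how an attacker's share shrinks as the validator set grows.

```python
# Figures from the review's scenario: ~$2M/year buys 100 validator nodes.
annual_budget = 2_000_000
attacker_nodes = 100
print(f"per-node cost: ${annual_budget / attacker_nodes:,.0f}/year")

# Hypothetical honest-validator counts (not from the proposal), showing
# why scarcity in the early period matters.
for honest in (50, 500, 5000):
    share = attacker_nodes / (attacker_nodes + honest)
    print(f"{honest:>5} honest validators -> attacker share {share:.0%}")
```

The same budget that is negligible against thousands of honest validators buys a controlling share when there are only dozens.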

They identified other concrete attacks as well.

Their core observation: "Permissionless-write with post-hoc reputation weighting is not defended against adversaries with patience and budget to earn reputation honestly before weaponising it."

5 critical attack classes · with specific cost estimates
Reviewer 4 — aegis · the editor / quality-assurance

Found 50 problems — version labels in wrong places, contradictions between documents, internal team identifiers leaking into public prose.

Biggest finding: The phase budget says one thing in the paper, a different thing in the supporting document, a third thing in the deck. The validator count drifts between 5, 10, and 12. The list of refused operations is worded differently in three places — and one of those three silently broadens what's refused.

Verdict: "Not ready for publication without a focused 2–4 hour copyedit round." Not a rewrite — but the editorial discipline is below what serious external partners expect.

5 critical · 14 high · 21 medium · 9 advisory

3. The four big problems they all agreed on

Independent reviewers reaching the same conclusion is the strongest signal. Four reviewers, four briefs, but they converged on these:

Problem A — The money plan rests on one shaky leg

If AI laboratories pay for grounding access at scale, the project pays for itself. If they don't, the validator-compensation model collapses, and with it the promise that "mutually-hostile communities can both check each other's claims" (because nobody's getting paid).

This needs honest tiering: pessimistic ($5–15M/yr), base case ($50–150M/yr), optimistic ($500M+/yr). Plus a fallback model — what does Veritas look like if AI labs don't sign up at all?

Problem B — "Permissionless write" and "narrow refusals" contradict each other

Two of the five things the protocol refuses to record (credible threats; mass-casualty-weapon synthesis instructions) are topical, not operational. They look like content moderation in disguise. The paper either has to redraw the line or admit that some refusals are content-based and explain why.

Problem C — "No single authority" isn't true

The foundation controls at least five editorial surfaces. That's substantial editorial power. The honest framing is "distributed authority with five disclosed editorial surfaces and appeal paths" rather than "no single authority decides anything."

Problem D — Adversarial costs are too low

$2 million/year buys a state-actor presence; $500K–$5M can flip a cascade event; $2K can spam the write layer. These are small relative to what's at stake on important public claims. The defences need to scale with the value at risk, not stay flat.
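One way to express "defences scale with the value at risk" is a minimum attack-cost floor proportional to what a claim is worth. This is entirely our sketch: the fraction and function are illustrative, not part of the proposal.

```python
# Illustrative: require an attack to cost at least some fraction of the
# value at risk on a claim, instead of a flat barrier.
ATTACK_COST_FRACTION = 0.10  # hypothetical parameter

def required_attack_cost(value_at_risk: float) -> float:
    return value_at_risk * ATTACK_COST_FRACTION

flat_barrier = 2_000_000  # the review's state-actor budget
for value in (10e6, 100e6, 1e9):
    scaled = required_attack_cost(value)
    verdict = "flat barrier too cheap" if flat_barrier < scaled else "flat barrier suffices"
    print(f"${value:,.0f} at risk: scaled floor ${scaled:,.0f} ({verdict})")
```

Against a $1B question, a flat $2M barrier is two orders of magnitude below a 10% floor; against a $10M question it is adequate, which is the reviewers' point.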

4. Sorting findings — answered, partly answered, still open

After the reviews came back, we went through every finding and asked: do we already have an answer? Does the v0.3 plan have an answer? Does the data we now have (especially the new oracle-economy research) resolve it? Or is it genuinely unsolved?

The findings split into three groups. The classification matters because "we have an answer but it's not yet shipped" is very different from "we don't yet know how to answer this and might never."

Has an answer (most of them)

Most findings have a clear response — either already shipped, committed in the v0.3 plan, or resolved by data.

Partly answered (direction set; full close needs more work)

Genuinely open (cannot promise to close in v0.3)

These are the things we will not pretend to have solved.

This split lets you read the critique without a false sense of precision. Most findings are addressable. A few are not — and we say which.

5. What's already fixed

Some of the reviewer findings were small and specific; we did a copyedit round immediately after the reviews came back, and those fixes are already in place.

6. What's not fixed yet — and why

The substantive criticism — the four big problems above — needs structural design work, not just copyediting. That's the v0.3 plan. The plain-English version of the v0.3 plan covers it; the technical version has the full mapping of every reviewer finding to a workstream.

In short, v0.3 commits to addressing every CRITICAL and HIGH finding.

Estimated calendar: 12–13 weeks. Cost: not zero. Result: a paper that survives serious peer review.

7. What this exercise tells us about the project

Three observations a thoughtful reader might take from this critical review:

The substantive idea is respectable. No reviewer concluded the core proposal is unsalvageable. The architecture story is coherent. The use of mature open standards (Verifiable Credentials, IETF SCITT, libp2p, Sigstore, Ethereum L2s) is grounded. The novel parts — domain-indexed verdicts, cascading retraction, investigation market — are real contributions.

The editorial discipline is not yet at the level required to win serious partners. Foundations that fund open-infrastructure projects (Mozilla, Knight) will catch the 40–160× revenue gap on first reading. Senior philosophers will catch the unaddressed Boghossian objection. The next round of writing has to do better.

The team commissioned the criticism and published it. That's a signal. Most projects of this kind publish their pitch and let critics shout from outside. Veritas published the critics' reports on the same website as the pitch, with the same prominence, and is now publishing a v0.3 plan that addresses every CRITICAL and HIGH finding. Reasonable readers can decide what that's worth.

Veritas Protocol · Plain-English critical review · v0.2 · April 2026